00:00:00.000 Started by upstream project "autotest-per-patch" build number 120480 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.119 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.120 The recommended git tool is: git 00:00:00.120 using credential 00000000-0000-0000-0000-000000000002 00:00:00.121 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.155 Fetching changes from the remote Git repository 00:00:00.156 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.189 Using shallow fetch with depth 1 00:00:00.189 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.190 > git --version # timeout=10 00:00:00.215 > git --version # 'git version 2.39.2' 00:00:00.215 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.215 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.215 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.755 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.765 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.775 Checking out Revision 27f13fcb4eea6a447c9f3d131408acb483141c09 (FETCH_HEAD) 00:00:05.775 > git config core.sparsecheckout # timeout=10 00:00:05.785 > git read-tree -mu HEAD # timeout=10 00:00:05.800 > git checkout -f 27f13fcb4eea6a447c9f3d131408acb483141c09 # timeout=5 00:00:05.841 Commit message: "docker/pdu_power: add PDU APC-C14 and APC-C18" 00:00:05.842 > git rev-list --no-walk 27f13fcb4eea6a447c9f3d131408acb483141c09 # timeout=10 00:00:05.941 [Pipeline] Start of Pipeline 00:00:05.954 [Pipeline] library 00:00:05.955 Loading library shm_lib@master 00:00:05.955 Library shm_lib@master is cached. Copying from home. 00:00:05.966 [Pipeline] node 00:00:20.968 Still waiting to schedule task 00:00:20.968 Waiting for next available executor on ‘vagrant-vm-host’ 00:11:13.285 Running on VM-host-SM4 in /var/jenkins/workspace/nvme-vg-autotest 00:11:13.286 [Pipeline] { 00:11:13.299 [Pipeline] catchError 00:11:13.301 [Pipeline] { 00:11:13.317 [Pipeline] wrap 00:11:13.328 [Pipeline] { 00:11:13.337 [Pipeline] stage 00:11:13.339 [Pipeline] { (Prologue) 00:11:13.360 [Pipeline] echo 00:11:13.362 Node: VM-host-SM4 00:11:13.368 [Pipeline] cleanWs 00:11:13.378 [WS-CLEANUP] Deleting project workspace... 00:11:13.378 [WS-CLEANUP] Deferred wipeout is used... 
00:11:13.384 [WS-CLEANUP] done 00:11:13.549 [Pipeline] setCustomBuildProperty 00:11:13.620 [Pipeline] nodesByLabel 00:11:13.621 Found a total of 1 nodes with the 'sorcerer' label 00:11:13.629 [Pipeline] httpRequest 00:11:13.633 HttpMethod: GET 00:11:13.634 URL: http://10.211.164.101/packages/jbp_27f13fcb4eea6a447c9f3d131408acb483141c09.tar.gz 00:11:13.642 Sending request to url: http://10.211.164.101/packages/jbp_27f13fcb4eea6a447c9f3d131408acb483141c09.tar.gz 00:11:13.643 Response Code: HTTP/1.1 200 OK 00:11:13.644 Success: Status code 200 is in the accepted range: 200,404 00:11:13.644 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_27f13fcb4eea6a447c9f3d131408acb483141c09.tar.gz 00:11:13.782 [Pipeline] sh 00:11:14.064 + tar --no-same-owner -xf jbp_27f13fcb4eea6a447c9f3d131408acb483141c09.tar.gz 00:11:14.084 [Pipeline] httpRequest 00:11:14.088 HttpMethod: GET 00:11:14.088 URL: http://10.211.164.101/packages/spdk_0fa934e8f41d43921e51160cbf7229a1d6eece47.tar.gz 00:11:14.090 Sending request to url: http://10.211.164.101/packages/spdk_0fa934e8f41d43921e51160cbf7229a1d6eece47.tar.gz 00:11:14.092 Response Code: HTTP/1.1 200 OK 00:11:14.092 Success: Status code 200 is in the accepted range: 200,404 00:11:14.093 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_0fa934e8f41d43921e51160cbf7229a1d6eece47.tar.gz 00:11:16.258 [Pipeline] sh 00:11:16.536 + tar --no-same-owner -xf spdk_0fa934e8f41d43921e51160cbf7229a1d6eece47.tar.gz 00:11:19.866 [Pipeline] sh 00:11:20.146 + git -C spdk log --oneline -n5 00:11:20.146 0fa934e8f raid: add callback to raid_bdev_examine_sb() 00:11:20.146 115be10bf test/raid: always create pt bdevs in rebuild test 00:11:20.146 318c184cf test/raid: remove unnecessary recreating of base bdevs 00:11:20.146 23e5871e3 raid: allow re-adding base bdev when in CONFIGURING state 00:11:20.146 1f4493e34 raid: limit the no superblock examine case 00:11:20.164 [Pipeline] writeFile 00:11:20.181 [Pipeline] sh 00:11:20.464 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:11:20.478 [Pipeline] sh 00:11:20.759 + cat autorun-spdk.conf 00:11:20.759 SPDK_RUN_FUNCTIONAL_TEST=1 00:11:20.759 SPDK_TEST_NVME=1 00:11:20.759 SPDK_TEST_FTL=1 00:11:20.759 SPDK_TEST_ISAL=1 00:11:20.759 SPDK_RUN_ASAN=1 00:11:20.759 SPDK_RUN_UBSAN=1 00:11:20.759 SPDK_TEST_XNVME=1 00:11:20.759 SPDK_TEST_NVME_FDP=1 00:11:20.759 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:11:20.766 RUN_NIGHTLY=0 00:11:20.770 [Pipeline] } 00:11:20.789 [Pipeline] // stage 00:11:20.804 [Pipeline] stage 00:11:20.806 [Pipeline] { (Run VM) 00:11:20.821 [Pipeline] sh 00:11:21.138 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:11:21.138 + echo 'Start stage prepare_nvme.sh' 00:11:21.138 Start stage prepare_nvme.sh 00:11:21.138 + [[ -n 9 ]] 00:11:21.138 + disk_prefix=ex9 00:11:21.138 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:11:21.138 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:11:21.138 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:11:21.138 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:11:21.138 ++ SPDK_TEST_NVME=1 00:11:21.138 ++ SPDK_TEST_FTL=1 00:11:21.138 ++ SPDK_TEST_ISAL=1 00:11:21.138 ++ SPDK_RUN_ASAN=1 00:11:21.138 ++ SPDK_RUN_UBSAN=1 00:11:21.138 ++ SPDK_TEST_XNVME=1 00:11:21.138 ++ SPDK_TEST_NVME_FDP=1 00:11:21.138 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:11:21.138 ++ RUN_NIGHTLY=0 00:11:21.138 + cd /var/jenkins/workspace/nvme-vg-autotest 00:11:21.138 + nvme_files=() 00:11:21.138 + declare -A nvme_files 00:11:21.138 + 
backend_dir=/var/lib/libvirt/images/backends 00:11:21.138 + nvme_files['nvme.img']=5G 00:11:21.139 + nvme_files['nvme-cmb.img']=5G 00:11:21.139 + nvme_files['nvme-multi0.img']=4G 00:11:21.139 + nvme_files['nvme-multi1.img']=4G 00:11:21.139 + nvme_files['nvme-multi2.img']=4G 00:11:21.139 + nvme_files['nvme-openstack.img']=8G 00:11:21.139 + nvme_files['nvme-zns.img']=5G 00:11:21.139 + (( SPDK_TEST_NVME_PMR == 1 )) 00:11:21.139 + (( SPDK_TEST_FTL == 1 )) 00:11:21.139 + nvme_files["nvme-ftl.img"]=6G 00:11:21.139 + (( SPDK_TEST_NVME_FDP == 1 )) 00:11:21.139 + nvme_files["nvme-fdp.img"]=1G 00:11:21.139 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:11:21.139 + for nvme in "${!nvme_files[@]}" 00:11:21.139 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi2.img -s 4G 00:11:21.139 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:11:21.139 + for nvme in "${!nvme_files[@]}" 00:11:21.139 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-ftl.img -s 6G 00:11:21.398 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:11:21.398 + for nvme in "${!nvme_files[@]}" 00:11:21.398 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-cmb.img -s 5G 00:11:21.398 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:11:21.398 + for nvme in "${!nvme_files[@]}" 00:11:21.398 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-openstack.img -s 8G 00:11:21.398 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:11:21.398 + for nvme in "${!nvme_files[@]}" 00:11:21.398 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-zns.img -s 5G 00:11:21.398 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:11:21.398 + for nvme in "${!nvme_files[@]}" 00:11:21.398 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi1.img -s 4G 00:11:21.398 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:11:21.398 + for nvme in "${!nvme_files[@]}" 00:11:21.398 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi0.img -s 4G 00:11:21.398 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:11:21.398 + for nvme in "${!nvme_files[@]}" 00:11:21.398 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-fdp.img -s 1G 00:11:21.657 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:11:21.657 + for nvme in "${!nvme_files[@]}" 00:11:21.657 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme.img -s 5G 00:11:21.657 Formatting '/var/lib/libvirt/images/backends/ex9-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:11:21.657 ++ sudo grep -rl ex9-nvme.img /etc/libvirt/qemu 00:11:21.657 + echo 'End stage prepare_nvme.sh' 00:11:21.657 End stage prepare_nvme.sh 00:11:21.668 [Pipeline] sh 00:11:21.949 + DISTRO=fedora38 CPUS=10 RAM=12288 
jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:11:21.949 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex9-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex9-nvme.img -b /var/lib/libvirt/images/backends/ex9-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex9-nvme-multi1.img:/var/lib/libvirt/images/backends/ex9-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex9-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38 00:11:22.209 00:11:22.209 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:11:22.209 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:11:22.209 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:11:22.209 HELP=0 00:11:22.209 DRY_RUN=0 00:11:22.209 NVME_FILE=/var/lib/libvirt/images/backends/ex9-nvme-ftl.img,/var/lib/libvirt/images/backends/ex9-nvme.img,/var/lib/libvirt/images/backends/ex9-nvme-multi0.img,/var/lib/libvirt/images/backends/ex9-nvme-fdp.img, 00:11:22.209 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:11:22.209 NVME_AUTO_CREATE=0 00:11:22.209 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex9-nvme-multi1.img:/var/lib/libvirt/images/backends/ex9-nvme-multi2.img,, 00:11:22.209 NVME_CMB=,,,, 00:11:22.209 NVME_PMR=,,,, 00:11:22.209 NVME_ZNS=,,,, 00:11:22.209 NVME_MS=true,,,, 00:11:22.209 NVME_FDP=,,,on, 00:11:22.209 SPDK_VAGRANT_DISTRO=fedora38 00:11:22.209 SPDK_VAGRANT_VMCPU=10 00:11:22.209 SPDK_VAGRANT_VMRAM=12288 00:11:22.209 SPDK_VAGRANT_PROVIDER=libvirt 00:11:22.209 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:11:22.209 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:11:22.209 SPDK_OPENSTACK_NETWORK=0 00:11:22.209 VAGRANT_PACKAGE_BOX=0 00:11:22.209 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:11:22.209 FORCE_DISTRO=true 00:11:22.209 VAGRANT_BOX_VERSION= 00:11:22.209 EXTRA_VAGRANTFILES= 00:11:22.209 NIC_MODEL=e1000 00:11:22.209 00:11:22.209 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt' 00:11:22.209 /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:11:26.417 Bringing machine 'default' up with 'libvirt' provider... 00:11:26.676 ==> default: Creating image (snapshot of base box volume). 00:11:26.935 ==> default: Creating domain with the following settings... 
00:11:26.935 ==> default: -- Name: fedora38-38-1.6-1701806725-069-updated-1701632595-patched-kernel_default_1713364234_ef37b2538a639cd12760 00:11:26.935 ==> default: -- Domain type: kvm 00:11:26.935 ==> default: -- Cpus: 10 00:11:26.935 ==> default: -- Feature: acpi 00:11:26.935 ==> default: -- Feature: apic 00:11:26.935 ==> default: -- Feature: pae 00:11:26.935 ==> default: -- Memory: 12288M 00:11:26.935 ==> default: -- Memory Backing: hugepages: 00:11:26.935 ==> default: -- Management MAC: 00:11:26.935 ==> default: -- Loader: 00:11:26.935 ==> default: -- Nvram: 00:11:26.935 ==> default: -- Base box: spdk/fedora38 00:11:26.935 ==> default: -- Storage pool: default 00:11:26.935 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1701806725-069-updated-1701632595-patched-kernel_default_1713364234_ef37b2538a639cd12760.img (20G) 00:11:26.935 ==> default: -- Volume Cache: default 00:11:26.935 ==> default: -- Kernel: 00:11:26.935 ==> default: -- Initrd: 00:11:26.935 ==> default: -- Graphics Type: vnc 00:11:26.935 ==> default: -- Graphics Port: -1 00:11:26.935 ==> default: -- Graphics IP: 127.0.0.1 00:11:26.935 ==> default: -- Graphics Password: Not defined 00:11:26.935 ==> default: -- Video Type: cirrus 00:11:26.935 ==> default: -- Video VRAM: 9216 00:11:26.935 ==> default: -- Sound Type: 00:11:26.935 ==> default: -- Keymap: en-us 00:11:26.935 ==> default: -- TPM Path: 00:11:26.935 ==> default: -- INPUT: type=mouse, bus=ps2 00:11:26.935 ==> default: -- Command line args: 00:11:26.935 ==> default: -> value=-device, 00:11:26.935 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:11:26.935 ==> default: -> value=-drive, 00:11:26.935 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:11:26.935 ==> default: -> value=-device, 00:11:26.935 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:11:26.935 ==> default: -> value=-device, 00:11:26.935 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:11:26.935 ==> default: -> value=-drive, 00:11:26.935 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme.img,if=none,id=nvme-1-drive0, 00:11:26.935 ==> default: -> value=-device, 00:11:26.935 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:11:26.935 ==> default: -> value=-device, 00:11:26.935 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:11:26.935 ==> default: -> value=-drive, 00:11:26.935 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:11:26.935 ==> default: -> value=-device, 00:11:26.935 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:11:26.935 ==> default: -> value=-drive, 00:11:26.935 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:11:26.935 ==> default: -> value=-device, 00:11:26.935 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:11:26.935 ==> default: -> value=-drive, 00:11:26.935 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:11:26.935 ==> default: -> value=-device, 00:11:26.935 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:11:26.935 ==> default: -> value=-device, 00:11:26.935 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:11:26.935 ==> default: -> value=-device, 00:11:26.935 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:11:26.935 ==> default: -> value=-drive, 00:11:26.935 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:11:26.935 ==> default: -> value=-device, 00:11:26.935 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:11:26.935 ==> default: Creating shared folders metadata... 00:11:26.935 ==> default: Starting domain. 00:11:28.838 ==> default: Waiting for domain to get an IP address... 00:11:46.921 ==> default: Waiting for SSH to become available... 00:11:46.921 ==> default: Configuring and enabling network interfaces... 00:11:50.218 default: SSH address: 192.168.121.103:22 00:11:50.218 default: SSH username: vagrant 00:11:50.218 default: SSH auth method: private key 00:11:52.751 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:12:02.734 ==> default: Mounting SSHFS shared folder... 00:12:03.673 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:12:03.673 ==> default: Checking Mount.. 00:12:05.049 ==> default: Folder Successfully Mounted! 00:12:05.049 ==> default: Running provisioner: file... 00:12:05.617 default: ~/.gitconfig => .gitconfig 00:12:06.198 00:12:06.198 SUCCESS! 00:12:06.198 00:12:06.198 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:12:06.198 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:12:06.198 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:12:06.198 00:12:06.244 [Pipeline] } 00:12:06.263 [Pipeline] // stage 00:12:06.273 [Pipeline] dir 00:12:06.273 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt 00:12:06.275 [Pipeline] { 00:12:06.289 [Pipeline] catchError 00:12:06.291 [Pipeline] { 00:12:06.305 [Pipeline] sh 00:12:06.583 + vagrant ssh-config --host vagrant 00:12:06.583 + sed+ -ne /^Host/,$p 00:12:06.583 tee ssh_conf 00:12:09.890 Host vagrant 00:12:09.890 HostName 192.168.121.103 00:12:09.890 User vagrant 00:12:09.890 Port 22 00:12:09.890 UserKnownHostsFile /dev/null 00:12:09.890 StrictHostKeyChecking no 00:12:09.890 PasswordAuthentication no 00:12:09.890 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1701806725-069-updated-1701632595-patched-kernel/libvirt/fedora38 00:12:09.890 IdentitiesOnly yes 00:12:09.890 LogLevel FATAL 00:12:09.890 ForwardAgent yes 00:12:09.890 ForwardX11 yes 00:12:09.890 00:12:09.924 [Pipeline] withEnv 00:12:09.926 [Pipeline] { 00:12:09.941 [Pipeline] sh 00:12:10.219 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:12:10.220 source /etc/os-release 00:12:10.220 [[ -e /image.version ]] && img=$(< /image.version) 00:12:10.220 # Minimal, systemd-like check. 
00:12:10.220 if [[ -e /.dockerenv ]]; then 00:12:10.220 # Clear garbage from the node's name: 00:12:10.220 # agt-er_autotest_547-896 -> autotest_547-896 00:12:10.220 # $HOSTNAME is the actual container id 00:12:10.220 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:12:10.220 if mountpoint -q /etc/hostname; then 00:12:10.220 # We can assume this is a mount from a host where container is running, 00:12:10.220 # so fetch its hostname to easily identify the target swarm worker. 00:12:10.220 container="$(< /etc/hostname) ($agent)" 00:12:10.220 else 00:12:10.220 # Fallback 00:12:10.220 container=$agent 00:12:10.220 fi 00:12:10.220 fi 00:12:10.220 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:12:10.220 00:12:10.490 [Pipeline] } 00:12:10.512 [Pipeline] // withEnv 00:12:10.521 [Pipeline] setCustomBuildProperty 00:12:10.540 [Pipeline] stage 00:12:10.542 [Pipeline] { (Tests) 00:12:10.561 [Pipeline] sh 00:12:10.838 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:12:11.110 [Pipeline] timeout 00:12:11.110 Timeout set to expire in 40 min 00:12:11.112 [Pipeline] { 00:12:11.125 [Pipeline] sh 00:12:11.406 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:12:11.987 HEAD is now at 0fa934e8f raid: add callback to raid_bdev_examine_sb() 00:12:12.004 [Pipeline] sh 00:12:12.278 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:12:12.550 [Pipeline] sh 00:12:12.871 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:12:12.885 [Pipeline] sh 00:12:13.160 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:12:13.418 ++ readlink -f spdk_repo 00:12:13.418 + DIR_ROOT=/home/vagrant/spdk_repo 00:12:13.418 + [[ -n /home/vagrant/spdk_repo ]] 00:12:13.418 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:12:13.418 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:12:13.418 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:12:13.418 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:12:13.418 + [[ -d /home/vagrant/spdk_repo/output ]] 00:12:13.418 + cd /home/vagrant/spdk_repo 00:12:13.418 + source /etc/os-release 00:12:13.418 ++ NAME='Fedora Linux' 00:12:13.418 ++ VERSION='38 (Cloud Edition)' 00:12:13.418 ++ ID=fedora 00:12:13.418 ++ VERSION_ID=38 00:12:13.418 ++ VERSION_CODENAME= 00:12:13.418 ++ PLATFORM_ID=platform:f38 00:12:13.418 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:12:13.418 ++ ANSI_COLOR='0;38;2;60;110;180' 00:12:13.418 ++ LOGO=fedora-logo-icon 00:12:13.418 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:12:13.418 ++ HOME_URL=https://fedoraproject.org/ 00:12:13.418 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:12:13.418 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:12:13.418 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:12:13.418 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:12:13.418 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:12:13.418 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:12:13.418 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:12:13.418 ++ SUPPORT_END=2024-05-14 00:12:13.418 ++ VARIANT='Cloud Edition' 00:12:13.418 ++ VARIANT_ID=cloud 00:12:13.418 + uname -a 00:12:13.418 Linux fedora38-cloud-1701806725-069-updated-1701632595 6.5.12-200.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Sun Dec 3 20:08:38 UTC 2023 x86_64 GNU/Linux 00:12:13.418 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:12:13.676 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:14.242 lsblk: /dev/nvme3c3n1: not a block device 00:12:14.242 Hugepages 00:12:14.242 node hugesize free / total 00:12:14.242 node0 1048576kB 0 / 0 00:12:14.242 node0 2048kB 0 / 0 00:12:14.242 00:12:14.242 Type BDF Vendor Device NUMA Driver Device Block devices 00:12:14.242 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:12:14.242 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:12:14.242 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:12:14.242 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:12:14.499 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3c3n1 00:12:14.499 + rm -f /tmp/spdk-ld-path 00:12:14.499 + source autorun-spdk.conf 00:12:14.499 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:12:14.499 ++ SPDK_TEST_NVME=1 00:12:14.499 ++ SPDK_TEST_FTL=1 00:12:14.499 ++ SPDK_TEST_ISAL=1 00:12:14.499 ++ SPDK_RUN_ASAN=1 00:12:14.499 ++ SPDK_RUN_UBSAN=1 00:12:14.499 ++ SPDK_TEST_XNVME=1 00:12:14.499 ++ SPDK_TEST_NVME_FDP=1 00:12:14.499 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:12:14.499 ++ RUN_NIGHTLY=0 00:12:14.499 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:12:14.499 + [[ -n '' ]] 00:12:14.499 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:12:14.499 + for M in /var/spdk/build-*-manifest.txt 00:12:14.499 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:12:14.499 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:12:14.499 + for M in /var/spdk/build-*-manifest.txt 00:12:14.499 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:12:14.499 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:12:14.499 + for M in /var/spdk/build-*-manifest.txt 00:12:14.499 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:12:14.499 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:12:14.499 ++ uname 00:12:14.499 + [[ Linux == \L\i\n\u\x ]] 00:12:14.499 + sudo dmesg -T 00:12:14.499 + sudo dmesg --clear 00:12:14.758 + 
dmesg_pid=5129 00:12:14.758 + sudo dmesg -Tw 00:12:14.758 + [[ Fedora Linux == FreeBSD ]] 00:12:14.758 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:14.758 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:14.758 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:12:14.758 + [[ -x /usr/src/fio-static/fio ]] 00:12:14.758 + export FIO_BIN=/usr/src/fio-static/fio 00:12:14.758 + FIO_BIN=/usr/src/fio-static/fio 00:12:14.758 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:12:14.758 + [[ ! -v VFIO_QEMU_BIN ]] 00:12:14.758 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:12:14.758 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:14.758 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:14.758 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:12:14.758 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:14.758 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:14.758 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:12:14.758 Test configuration: 00:12:14.758 SPDK_RUN_FUNCTIONAL_TEST=1 00:12:14.758 SPDK_TEST_NVME=1 00:12:14.758 SPDK_TEST_FTL=1 00:12:14.758 SPDK_TEST_ISAL=1 00:12:14.758 SPDK_RUN_ASAN=1 00:12:14.758 SPDK_RUN_UBSAN=1 00:12:14.758 SPDK_TEST_XNVME=1 00:12:14.758 SPDK_TEST_NVME_FDP=1 00:12:14.758 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:12:14.758 RUN_NIGHTLY=0 14:31:23 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:14.758 14:31:23 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:12:14.758 14:31:23 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.758 14:31:23 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.758 14:31:23 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.758 14:31:23 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.758 14:31:23 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.758 14:31:23 -- paths/export.sh@5 -- $ export PATH 00:12:14.758 14:31:23 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.758 14:31:23 -- 
common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:12:14.758 14:31:23 -- common/autobuild_common.sh@435 -- $ date +%s 00:12:14.758 14:31:23 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713364283.XXXXXX 00:12:14.758 14:31:23 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713364283.P9ESLq 00:12:14.758 14:31:23 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:12:14.758 14:31:23 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:12:14.758 14:31:23 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:12:14.758 14:31:23 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:12:14.758 14:31:23 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:12:14.758 14:31:23 -- common/autobuild_common.sh@451 -- $ get_config_params 00:12:14.758 14:31:23 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:12:14.758 14:31:23 -- common/autotest_common.sh@10 -- $ set +x 00:12:14.758 14:31:23 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:12:14.758 14:31:23 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:12:14.758 14:31:23 -- pm/common@17 -- $ local monitor 00:12:14.758 14:31:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:12:14.758 14:31:23 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5163 00:12:14.758 14:31:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:12:14.758 14:31:23 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5165 00:12:14.758 14:31:23 -- pm/common@26 -- $ sleep 1 00:12:14.758 14:31:23 -- pm/common@21 -- $ date +%s 00:12:14.758 14:31:23 -- pm/common@21 -- $ date +%s 00:12:14.758 14:31:23 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713364283 00:12:14.758 14:31:23 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713364283 00:12:14.758 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713364283_collect-vmstat.pm.log 00:12:14.758 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713364283_collect-cpu-load.pm.log 00:12:15.693 14:31:24 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:12:15.693 14:31:24 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:12:15.693 14:31:24 -- spdk/autobuild.sh@12 -- $ umask 022 00:12:15.693 14:31:24 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:12:15.693 14:31:24 -- spdk/autobuild.sh@16 -- $ date -u 00:12:15.693 Wed Apr 17 02:31:24 PM UTC 2024 00:12:15.693 14:31:24 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:12:15.693 v24.05-pre-392-g0fa934e8f 00:12:15.693 14:31:24 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:12:15.693 14:31:24 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:12:15.693 14:31:24 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 
00:12:15.693 14:31:24 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:12:15.693 14:31:24 -- common/autotest_common.sh@10 -- $ set +x 00:12:15.953 ************************************ 00:12:15.953 START TEST asan 00:12:15.953 ************************************ 00:12:15.953 using asan 00:12:15.953 14:31:24 -- common/autotest_common.sh@1111 -- $ echo 'using asan' 00:12:15.953 00:12:15.953 real 0m0.000s 00:12:15.953 user 0m0.000s 00:12:15.953 sys 0m0.000s 00:12:15.953 14:31:24 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:12:15.953 ************************************ 00:12:15.953 END TEST asan 00:12:15.953 14:31:24 -- common/autotest_common.sh@10 -- $ set +x 00:12:15.953 ************************************ 00:12:15.953 14:31:24 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:12:15.953 14:31:24 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:12:15.953 14:31:24 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:12:15.953 14:31:24 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:12:15.953 14:31:24 -- common/autotest_common.sh@10 -- $ set +x 00:12:15.953 ************************************ 00:12:15.953 START TEST ubsan 00:12:15.953 ************************************ 00:12:15.953 using ubsan 00:12:15.953 14:31:24 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:12:15.953 00:12:15.953 real 0m0.000s 00:12:15.953 user 0m0.000s 00:12:15.953 sys 0m0.000s 00:12:15.953 14:31:24 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:12:15.953 ************************************ 00:12:15.953 14:31:24 -- common/autotest_common.sh@10 -- $ set +x 00:12:15.953 END TEST ubsan 00:12:15.953 ************************************ 00:12:15.953 14:31:24 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:12:15.953 14:31:24 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:12:15.953 14:31:24 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:12:15.953 14:31:24 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:12:15.953 14:31:24 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:12:15.953 14:31:24 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:12:15.953 14:31:24 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:12:15.953 14:31:24 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:12:15.953 14:31:24 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:12:16.211 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:16.211 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:12:16.802 Using 'verbs' RDMA provider 00:12:32.659 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:12:47.601 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:12:47.601 Creating mk/config.mk...done. 00:12:47.601 Creating mk/cc.flags.mk...done. 00:12:47.601 Type 'make' to build. 
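(For reference: the configure step that just completed can be reproduced outside the Jenkins/Vagrant harness. A minimal sketch, assuming an SPDK checkout with submodules initialized at ~/spdk_repo/spdk; the flags are copied verbatim from the configure invocation logged above, and -j10 matches the make step that follows.)

    # Sketch: rebuild SPDK with the same options this CI run used.
    cd ~/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-asan --enable-coverage --with-ublk \
        --with-xnvme --with-shared
    make -j10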
00:12:47.601 14:31:54 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:12:47.601 14:31:54 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:12:47.601 14:31:54 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:12:47.601 14:31:54 -- common/autotest_common.sh@10 -- $ set +x 00:12:47.601 ************************************ 00:12:47.601 START TEST make 00:12:47.601 ************************************ 00:12:47.601 14:31:54 -- common/autotest_common.sh@1111 -- $ make -j10 00:12:47.601 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:12:47.601 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:12:47.601 meson setup builddir \ 00:12:47.601 -Dwith-libaio=enabled \ 00:12:47.601 -Dwith-liburing=enabled \ 00:12:47.601 -Dwith-libvfn=disabled \ 00:12:47.601 -Dwith-spdk=false && \ 00:12:47.601 meson compile -C builddir && \ 00:12:47.601 cd -) 00:12:47.601 make[1]: Nothing to be done for 'all'. 00:12:50.134 The Meson build system 00:12:50.134 Version: 1.3.0 00:12:50.134 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:12:50.134 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:12:50.134 Build type: native build 00:12:50.134 Project name: xnvme 00:12:50.134 Project version: 0.7.3 00:12:50.134 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:12:50.134 C linker for the host machine: cc ld.bfd 2.39-16 00:12:50.134 Host machine cpu family: x86_64 00:12:50.134 Host machine cpu: x86_64 00:12:50.134 Message: host_machine.system: linux 00:12:50.134 Compiler for C supports arguments -Wno-missing-braces: YES 00:12:50.134 Compiler for C supports arguments -Wno-cast-function-type: YES 00:12:50.134 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:12:50.134 Run-time dependency threads found: YES 00:12:50.134 Has header "setupapi.h" : NO 00:12:50.135 Has header "linux/blkzoned.h" : YES 00:12:50.135 Has header "linux/blkzoned.h" : YES (cached) 00:12:50.135 Has header "libaio.h" : YES 00:12:50.135 Library aio found: YES 00:12:50.135 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:12:50.135 Run-time dependency liburing found: YES 2.2 00:12:50.135 Dependency libvfn skipped: feature with-libvfn disabled 00:12:50.135 Run-time dependency appleframeworks found: NO (tried framework) 00:12:50.135 Run-time dependency appleframeworks found: NO (tried framework) 00:12:50.135 Configuring xnvme_config.h using configuration 00:12:50.135 Configuring xnvme.spec using configuration 00:12:50.135 Run-time dependency bash-completion found: YES 2.11 00:12:50.135 Message: Bash-completions: /usr/share/bash-completion/completions 00:12:50.135 Program cp found: YES (/usr/bin/cp) 00:12:50.135 Has header "winsock2.h" : NO 00:12:50.135 Has header "dbghelp.h" : NO 00:12:50.135 Library rpcrt4 found: NO 00:12:50.135 Library rt found: YES 00:12:50.135 Checking for function "clock_gettime" with dependency -lrt: YES 00:12:50.135 Found CMake: /usr/bin/cmake (3.27.7) 00:12:50.135 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:12:50.135 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:12:50.135 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:12:50.135 Build targets in project: 32 00:12:50.135 00:12:50.135 xnvme 0.7.3 00:12:50.135 00:12:50.135 User defined options 00:12:50.135 with-libaio : enabled 00:12:50.135 with-liburing: enabled 00:12:50.135 with-libvfn : disabled 00:12:50.135 with-spdk : false 00:12:50.135 00:12:50.135 Found ninja-1.11.1.git.kitware.jobserver-1 at 
/usr/local/bin/ninja 00:12:50.135 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:12:50.135 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:12:50.135 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:12:50.135 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:12:50.135 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:12:50.135 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:12:50.135 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:12:50.393 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:12:50.393 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:12:50.393 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:12:50.393 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:12:50.394 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:12:50.394 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:12:50.394 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:12:50.394 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:12:50.394 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:12:50.394 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:12:50.394 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:12:50.394 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:12:50.394 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:12:50.394 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:12:50.394 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:12:50.394 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:12:50.394 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:12:50.394 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:12:50.394 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:12:50.394 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:12:50.394 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:12:50.652 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:12:50.652 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:12:50.652 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:12:50.652 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:12:50.652 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:12:50.652 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:12:50.652 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:12:50.652 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:12:50.652 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:12:50.652 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:12:50.652 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:12:50.652 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:12:50.652 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:12:50.652 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:12:50.652 [42/203] Compiling C object 
lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:12:50.652 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:12:50.652 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:12:50.652 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:12:50.652 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:12:50.652 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:12:50.652 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:12:50.652 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:12:50.652 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:12:50.652 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:12:50.652 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:12:50.652 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:12:50.652 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:12:50.652 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:12:50.652 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:12:50.910 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:12:50.910 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:12:50.910 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:12:50.910 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:12:50.910 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:12:50.910 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:12:50.910 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:12:50.910 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:12:50.910 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:12:50.910 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:12:50.910 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:12:50.910 [68/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:12:50.910 [69/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:12:50.910 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:12:50.910 [71/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:12:51.168 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:12:51.168 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:12:51.168 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:12:51.168 [75/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:12:51.168 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:12:51.168 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:12:51.168 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:12:51.168 [79/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:12:51.168 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:12:51.168 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:12:51.168 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:12:51.168 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:12:51.168 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:12:51.168 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:12:51.168 [86/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:12:51.168 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:12:51.168 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:12:51.168 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:12:51.168 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:12:51.168 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:12:51.426 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:12:51.426 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:12:51.426 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:12:51.426 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:12:51.426 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:12:51.426 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:12:51.426 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:12:51.426 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:12:51.426 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:12:51.426 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:12:51.426 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:12:51.426 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:12:51.426 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:12:51.426 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:12:51.426 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:12:51.426 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:12:51.426 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:12:51.426 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:12:51.426 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:12:51.426 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:12:51.426 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:12:51.426 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:12:51.426 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:12:51.426 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:12:51.426 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:12:51.426 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:12:51.426 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:12:51.426 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:12:51.426 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:12:51.426 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:12:51.426 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:12:51.684 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:12:51.684 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:12:51.684 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:12:51.684 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:12:51.684 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:12:51.684 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:12:51.684 [129/203] Compiling C object 
lib/libxnvme.a.p/xnvme_geo.c.o 00:12:51.684 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:12:51.684 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:12:51.684 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:12:51.684 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:12:51.684 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:12:51.684 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:12:51.684 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:12:51.684 [137/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:12:51.684 [138/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:12:51.684 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:12:51.684 [140/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:12:51.942 [141/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:12:51.942 [142/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:12:51.942 [143/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:12:51.942 [144/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:12:51.942 [145/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:12:51.942 [146/203] Linking target lib/libxnvme.so 00:12:51.942 [147/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:12:51.942 [148/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:12:51.942 [149/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:12:51.942 [150/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:12:51.942 [151/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:12:51.942 [152/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:12:51.942 [153/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:12:51.942 [154/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:12:51.942 [155/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:12:51.942 [156/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:12:52.200 [157/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:12:52.200 [158/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:12:52.200 [159/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:12:52.200 [160/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:12:52.200 [161/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:12:52.200 [162/203] Compiling C object tools/xdd.p/xdd.c.o 00:12:52.200 [163/203] Compiling C object tools/lblk.p/lblk.c.o 00:12:52.200 [164/203] Compiling C object tools/zoned.p/zoned.c.o 00:12:52.200 [165/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:12:52.200 [166/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:12:52.200 [167/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:12:52.200 [168/203] Compiling C object tools/kvs.p/kvs.c.o 00:12:52.200 [169/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:12:52.200 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:12:52.458 [171/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:12:52.458 [172/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:12:52.458 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:12:52.458 [174/203] Linking static target lib/libxnvme.a 00:12:52.458 [175/203] Linking target tests/xnvme_tests_buf 
00:12:52.458 [176/203] Linking target tests/xnvme_tests_enum 00:12:52.458 [177/203] Linking target tests/xnvme_tests_cli 00:12:52.458 [178/203] Linking target tests/xnvme_tests_async_intf 00:12:52.458 [179/203] Linking target tests/xnvme_tests_xnvme_file 00:12:52.458 [180/203] Linking target tests/xnvme_tests_lblk 00:12:52.458 [181/203] Linking target tests/xnvme_tests_znd_explicit_open 00:12:52.458 [182/203] Linking target tests/xnvme_tests_znd_append 00:12:52.458 [183/203] Linking target tests/xnvme_tests_scc 00:12:52.458 [184/203] Linking target tests/xnvme_tests_ioworker 00:12:52.458 [185/203] Linking target tests/xnvme_tests_map 00:12:52.458 [186/203] Linking target tools/lblk 00:12:52.458 [187/203] Linking target tests/xnvme_tests_xnvme_cli 00:12:52.458 [188/203] Linking target tests/xnvme_tests_kvs 00:12:52.458 [189/203] Linking target tests/xnvme_tests_znd_state 00:12:52.458 [190/203] Linking target tests/xnvme_tests_znd_zrwa 00:12:52.458 [191/203] Linking target tools/xdd 00:12:52.458 [192/203] Linking target tools/xnvme 00:12:52.458 [193/203] Linking target tools/zoned 00:12:52.458 [194/203] Linking target tools/xnvme_file 00:12:52.458 [195/203] Linking target examples/xnvme_enum 00:12:52.458 [196/203] Linking target tools/kvs 00:12:52.458 [197/203] Linking target examples/xnvme_io_async 00:12:52.458 [198/203] Linking target examples/xnvme_single_async 00:12:52.717 [199/203] Linking target examples/xnvme_dev 00:12:52.717 [200/203] Linking target examples/xnvme_single_sync 00:12:52.717 [201/203] Linking target examples/zoned_io_async 00:12:52.717 [202/203] Linking target examples/xnvme_hello 00:12:52.717 [203/203] Linking target examples/zoned_io_sync 00:12:52.717 INFO: autodetecting backend as ninja 00:12:52.717 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:12:52.717 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:13:00.842 The Meson build system 00:13:00.842 Version: 1.3.0 00:13:00.842 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:13:00.842 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:13:00.842 Build type: native build 00:13:00.842 Program cat found: YES (/usr/bin/cat) 00:13:00.842 Project name: DPDK 00:13:00.842 Project version: 23.11.0 00:13:00.842 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:13:00.842 C linker for the host machine: cc ld.bfd 2.39-16 00:13:00.842 Host machine cpu family: x86_64 00:13:00.842 Host machine cpu: x86_64 00:13:00.842 Message: ## Building in Developer Mode ## 00:13:00.842 Program pkg-config found: YES (/usr/bin/pkg-config) 00:13:00.842 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:13:00.842 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:13:00.842 Program python3 found: YES (/usr/bin/python3) 00:13:00.842 Program cat found: YES (/usr/bin/cat) 00:13:00.842 Compiler for C supports arguments -march=native: YES 00:13:00.842 Checking for size of "void *" : 8 00:13:00.842 Checking for size of "void *" : 8 (cached) 00:13:00.842 Library m found: YES 00:13:00.842 Library numa found: YES 00:13:00.842 Has header "numaif.h" : YES 00:13:00.842 Library fdt found: NO 00:13:00.842 Library execinfo found: NO 00:13:00.842 Has header "execinfo.h" : YES 00:13:00.842 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:13:00.842 Run-time dependency libarchive found: NO (tried pkgconfig) 00:13:00.842 Run-time 
dependency libbsd found: NO (tried pkgconfig) 00:13:00.842 Run-time dependency jansson found: NO (tried pkgconfig) 00:13:00.842 Run-time dependency openssl found: YES 3.0.9 00:13:00.842 Run-time dependency libpcap found: YES 1.10.4 00:13:00.842 Has header "pcap.h" with dependency libpcap: YES 00:13:00.842 Compiler for C supports arguments -Wcast-qual: YES 00:13:00.842 Compiler for C supports arguments -Wdeprecated: YES 00:13:00.842 Compiler for C supports arguments -Wformat: YES 00:13:00.842 Compiler for C supports arguments -Wformat-nonliteral: NO 00:13:00.842 Compiler for C supports arguments -Wformat-security: NO 00:13:00.842 Compiler for C supports arguments -Wmissing-declarations: YES 00:13:00.842 Compiler for C supports arguments -Wmissing-prototypes: YES 00:13:00.842 Compiler for C supports arguments -Wnested-externs: YES 00:13:00.842 Compiler for C supports arguments -Wold-style-definition: YES 00:13:00.842 Compiler for C supports arguments -Wpointer-arith: YES 00:13:00.842 Compiler for C supports arguments -Wsign-compare: YES 00:13:00.842 Compiler for C supports arguments -Wstrict-prototypes: YES 00:13:00.842 Compiler for C supports arguments -Wundef: YES 00:13:00.842 Compiler for C supports arguments -Wwrite-strings: YES 00:13:00.842 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:13:00.842 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:13:00.842 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:13:00.842 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:13:00.842 Program objdump found: YES (/usr/bin/objdump) 00:13:00.842 Compiler for C supports arguments -mavx512f: YES 00:13:00.842 Checking if "AVX512 checking" compiles: YES 00:13:00.842 Fetching value of define "__SSE4_2__" : 1 00:13:00.842 Fetching value of define "__AES__" : 1 00:13:00.842 Fetching value of define "__AVX__" : 1 00:13:00.842 Fetching value of define "__AVX2__" : 1 00:13:00.842 Fetching value of define "__AVX512BW__" : 1 00:13:00.842 Fetching value of define "__AVX512CD__" : 1 00:13:00.842 Fetching value of define "__AVX512DQ__" : 1 00:13:00.842 Fetching value of define "__AVX512F__" : 1 00:13:00.842 Fetching value of define "__AVX512VL__" : 1 00:13:00.842 Fetching value of define "__PCLMUL__" : 1 00:13:00.842 Fetching value of define "__RDRND__" : 1 00:13:00.842 Fetching value of define "__RDSEED__" : 1 00:13:00.842 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:13:00.842 Fetching value of define "__znver1__" : (undefined) 00:13:00.842 Fetching value of define "__znver2__" : (undefined) 00:13:00.842 Fetching value of define "__znver3__" : (undefined) 00:13:00.842 Fetching value of define "__znver4__" : (undefined) 00:13:00.842 Library asan found: YES 00:13:00.842 Compiler for C supports arguments -Wno-format-truncation: YES 00:13:00.842 Message: lib/log: Defining dependency "log" 00:13:00.842 Message: lib/kvargs: Defining dependency "kvargs" 00:13:00.842 Message: lib/telemetry: Defining dependency "telemetry" 00:13:00.842 Library rt found: YES 00:13:00.843 Checking for function "getentropy" : NO 00:13:00.843 Message: lib/eal: Defining dependency "eal" 00:13:00.843 Message: lib/ring: Defining dependency "ring" 00:13:00.843 Message: lib/rcu: Defining dependency "rcu" 00:13:00.843 Message: lib/mempool: Defining dependency "mempool" 00:13:00.843 Message: lib/mbuf: Defining dependency "mbuf" 00:13:00.843 Fetching value of define "__PCLMUL__" : 1 (cached) 00:13:00.843 Fetching value of define "__AVX512F__" : 1 (cached) 
00:13:00.843 Fetching value of define "__AVX512BW__" : 1 (cached) 00:13:00.843 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:13:00.843 Fetching value of define "__AVX512VL__" : 1 (cached) 00:13:00.843 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:13:00.843 Compiler for C supports arguments -mpclmul: YES 00:13:00.843 Compiler for C supports arguments -maes: YES 00:13:00.843 Compiler for C supports arguments -mavx512f: YES (cached) 00:13:00.843 Compiler for C supports arguments -mavx512bw: YES 00:13:00.843 Compiler for C supports arguments -mavx512dq: YES 00:13:00.843 Compiler for C supports arguments -mavx512vl: YES 00:13:00.843 Compiler for C supports arguments -mvpclmulqdq: YES 00:13:00.843 Compiler for C supports arguments -mavx2: YES 00:13:00.843 Compiler for C supports arguments -mavx: YES 00:13:00.843 Message: lib/net: Defining dependency "net" 00:13:00.843 Message: lib/meter: Defining dependency "meter" 00:13:00.843 Message: lib/ethdev: Defining dependency "ethdev" 00:13:00.843 Message: lib/pci: Defining dependency "pci" 00:13:00.843 Message: lib/cmdline: Defining dependency "cmdline" 00:13:00.843 Message: lib/hash: Defining dependency "hash" 00:13:00.843 Message: lib/timer: Defining dependency "timer" 00:13:00.843 Message: lib/compressdev: Defining dependency "compressdev" 00:13:00.843 Message: lib/cryptodev: Defining dependency "cryptodev" 00:13:00.843 Message: lib/dmadev: Defining dependency "dmadev" 00:13:00.843 Compiler for C supports arguments -Wno-cast-qual: YES 00:13:00.843 Message: lib/power: Defining dependency "power" 00:13:00.843 Message: lib/reorder: Defining dependency "reorder" 00:13:00.843 Message: lib/security: Defining dependency "security" 00:13:00.843 Has header "linux/userfaultfd.h" : YES 00:13:00.843 Has header "linux/vduse.h" : YES 00:13:00.843 Message: lib/vhost: Defining dependency "vhost" 00:13:00.843 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:13:00.843 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:13:00.843 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:13:00.843 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:13:00.843 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:13:00.843 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:13:00.843 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:13:00.843 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:13:00.843 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:13:00.843 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:13:00.843 Program doxygen found: YES (/usr/bin/doxygen) 00:13:00.843 Configuring doxy-api-html.conf using configuration 00:13:00.843 Configuring doxy-api-man.conf using configuration 00:13:00.843 Program mandb found: YES (/usr/bin/mandb) 00:13:00.843 Program sphinx-build found: NO 00:13:00.843 Configuring rte_build_config.h using configuration 00:13:00.843 Message: 00:13:00.843 ================= 00:13:00.843 Applications Enabled 00:13:00.843 ================= 00:13:00.843 00:13:00.843 apps: 00:13:00.843 00:13:00.843 00:13:00.843 Message: 00:13:00.843 ================= 00:13:00.843 Libraries Enabled 00:13:00.843 ================= 00:13:00.843 00:13:00.843 libs: 00:13:00.843 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:13:00.843 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:13:00.843 cryptodev, dmadev, power, 
reorder, security, vhost, 00:13:00.843 00:13:00.843 Message: 00:13:00.843 =============== 00:13:00.843 Drivers Enabled 00:13:00.843 =============== 00:13:00.843 00:13:00.843 common: 00:13:00.843 00:13:00.843 bus: 00:13:00.843 pci, vdev, 00:13:00.843 mempool: 00:13:00.843 ring, 00:13:00.843 dma: 00:13:00.843 00:13:00.843 net: 00:13:00.843 00:13:00.843 crypto: 00:13:00.843 00:13:00.843 compress: 00:13:00.843 00:13:00.843 vdpa: 00:13:00.843 00:13:00.843 00:13:00.843 Message: 00:13:00.843 ================= 00:13:00.843 Content Skipped 00:13:00.843 ================= 00:13:00.843 00:13:00.843 apps: 00:13:00.843 dumpcap: explicitly disabled via build config 00:13:00.843 graph: explicitly disabled via build config 00:13:00.843 pdump: explicitly disabled via build config 00:13:00.843 proc-info: explicitly disabled via build config 00:13:00.843 test-acl: explicitly disabled via build config 00:13:00.843 test-bbdev: explicitly disabled via build config 00:13:00.843 test-cmdline: explicitly disabled via build config 00:13:00.843 test-compress-perf: explicitly disabled via build config 00:13:00.843 test-crypto-perf: explicitly disabled via build config 00:13:00.843 test-dma-perf: explicitly disabled via build config 00:13:00.843 test-eventdev: explicitly disabled via build config 00:13:00.843 test-fib: explicitly disabled via build config 00:13:00.843 test-flow-perf: explicitly disabled via build config 00:13:00.843 test-gpudev: explicitly disabled via build config 00:13:00.843 test-mldev: explicitly disabled via build config 00:13:00.843 test-pipeline: explicitly disabled via build config 00:13:00.843 test-pmd: explicitly disabled via build config 00:13:00.843 test-regex: explicitly disabled via build config 00:13:00.843 test-sad: explicitly disabled via build config 00:13:00.843 test-security-perf: explicitly disabled via build config 00:13:00.843 00:13:00.843 libs: 00:13:00.843 metrics: explicitly disabled via build config 00:13:00.843 acl: explicitly disabled via build config 00:13:00.843 bbdev: explicitly disabled via build config 00:13:00.843 bitratestats: explicitly disabled via build config 00:13:00.843 bpf: explicitly disabled via build config 00:13:00.843 cfgfile: explicitly disabled via build config 00:13:00.843 distributor: explicitly disabled via build config 00:13:00.843 efd: explicitly disabled via build config 00:13:00.843 eventdev: explicitly disabled via build config 00:13:00.843 dispatcher: explicitly disabled via build config 00:13:00.843 gpudev: explicitly disabled via build config 00:13:00.843 gro: explicitly disabled via build config 00:13:00.843 gso: explicitly disabled via build config 00:13:00.843 ip_frag: explicitly disabled via build config 00:13:00.843 jobstats: explicitly disabled via build config 00:13:00.843 latencystats: explicitly disabled via build config 00:13:00.843 lpm: explicitly disabled via build config 00:13:00.843 member: explicitly disabled via build config 00:13:00.843 pcapng: explicitly disabled via build config 00:13:00.843 rawdev: explicitly disabled via build config 00:13:00.843 regexdev: explicitly disabled via build config 00:13:00.843 mldev: explicitly disabled via build config 00:13:00.843 rib: explicitly disabled via build config 00:13:00.843 sched: explicitly disabled via build config 00:13:00.843 stack: explicitly disabled via build config 00:13:00.843 ipsec: explicitly disabled via build config 00:13:00.843 pdcp: explicitly disabled via build config 00:13:00.843 fib: explicitly disabled via build config 00:13:00.843 port: explicitly disabled via 
build config 00:13:00.843 pdump: explicitly disabled via build config 00:13:00.843 table: explicitly disabled via build config 00:13:00.843 pipeline: explicitly disabled via build config 00:13:00.843 graph: explicitly disabled via build config 00:13:00.843 node: explicitly disabled via build config 00:13:00.843 00:13:00.843 drivers: 00:13:00.843 common/cpt: not in enabled drivers build config 00:13:00.843 common/dpaax: not in enabled drivers build config 00:13:00.843 common/iavf: not in enabled drivers build config 00:13:00.843 common/idpf: not in enabled drivers build config 00:13:00.843 common/mvep: not in enabled drivers build config 00:13:00.843 common/octeontx: not in enabled drivers build config 00:13:00.843 bus/auxiliary: not in enabled drivers build config 00:13:00.843 bus/cdx: not in enabled drivers build config 00:13:00.843 bus/dpaa: not in enabled drivers build config 00:13:00.843 bus/fslmc: not in enabled drivers build config 00:13:00.843 bus/ifpga: not in enabled drivers build config 00:13:00.843 bus/platform: not in enabled drivers build config 00:13:00.843 bus/vmbus: not in enabled drivers build config 00:13:00.843 common/cnxk: not in enabled drivers build config 00:13:00.843 common/mlx5: not in enabled drivers build config 00:13:00.844 common/nfp: not in enabled drivers build config 00:13:00.844 common/qat: not in enabled drivers build config 00:13:00.844 common/sfc_efx: not in enabled drivers build config 00:13:00.844 mempool/bucket: not in enabled drivers build config 00:13:00.844 mempool/cnxk: not in enabled drivers build config 00:13:00.844 mempool/dpaa: not in enabled drivers build config 00:13:00.844 mempool/dpaa2: not in enabled drivers build config 00:13:00.844 mempool/octeontx: not in enabled drivers build config 00:13:00.844 mempool/stack: not in enabled drivers build config 00:13:00.844 dma/cnxk: not in enabled drivers build config 00:13:00.844 dma/dpaa: not in enabled drivers build config 00:13:00.844 dma/dpaa2: not in enabled drivers build config 00:13:00.844 dma/hisilicon: not in enabled drivers build config 00:13:00.844 dma/idxd: not in enabled drivers build config 00:13:00.844 dma/ioat: not in enabled drivers build config 00:13:00.844 dma/skeleton: not in enabled drivers build config 00:13:00.844 net/af_packet: not in enabled drivers build config 00:13:00.844 net/af_xdp: not in enabled drivers build config 00:13:00.844 net/ark: not in enabled drivers build config 00:13:00.844 net/atlantic: not in enabled drivers build config 00:13:00.844 net/avp: not in enabled drivers build config 00:13:00.844 net/axgbe: not in enabled drivers build config 00:13:00.844 net/bnx2x: not in enabled drivers build config 00:13:00.844 net/bnxt: not in enabled drivers build config 00:13:00.844 net/bonding: not in enabled drivers build config 00:13:00.844 net/cnxk: not in enabled drivers build config 00:13:00.844 net/cpfl: not in enabled drivers build config 00:13:00.844 net/cxgbe: not in enabled drivers build config 00:13:00.844 net/dpaa: not in enabled drivers build config 00:13:00.844 net/dpaa2: not in enabled drivers build config 00:13:00.844 net/e1000: not in enabled drivers build config 00:13:00.844 net/ena: not in enabled drivers build config 00:13:00.844 net/enetc: not in enabled drivers build config 00:13:00.844 net/enetfec: not in enabled drivers build config 00:13:00.844 net/enic: not in enabled drivers build config 00:13:00.844 net/failsafe: not in enabled drivers build config 00:13:00.844 net/fm10k: not in enabled drivers build config 00:13:00.844 net/gve: not in 
enabled drivers build config 00:13:00.844 net/hinic: not in enabled drivers build config 00:13:00.844 net/hns3: not in enabled drivers build config 00:13:00.844 net/i40e: not in enabled drivers build config 00:13:00.844 net/iavf: not in enabled drivers build config 00:13:00.844 net/ice: not in enabled drivers build config 00:13:00.844 net/idpf: not in enabled drivers build config 00:13:00.844 net/igc: not in enabled drivers build config 00:13:00.844 net/ionic: not in enabled drivers build config 00:13:00.844 net/ipn3ke: not in enabled drivers build config 00:13:00.844 net/ixgbe: not in enabled drivers build config 00:13:00.844 net/mana: not in enabled drivers build config 00:13:00.844 net/memif: not in enabled drivers build config 00:13:00.844 net/mlx4: not in enabled drivers build config 00:13:00.844 net/mlx5: not in enabled drivers build config 00:13:00.844 net/mvneta: not in enabled drivers build config 00:13:00.844 net/mvpp2: not in enabled drivers build config 00:13:00.844 net/netvsc: not in enabled drivers build config 00:13:00.844 net/nfb: not in enabled drivers build config 00:13:00.844 net/nfp: not in enabled drivers build config 00:13:00.844 net/ngbe: not in enabled drivers build config 00:13:00.844 net/null: not in enabled drivers build config 00:13:00.844 net/octeontx: not in enabled drivers build config 00:13:00.844 net/octeon_ep: not in enabled drivers build config 00:13:00.844 net/pcap: not in enabled drivers build config 00:13:00.844 net/pfe: not in enabled drivers build config 00:13:00.844 net/qede: not in enabled drivers build config 00:13:00.844 net/ring: not in enabled drivers build config 00:13:00.844 net/sfc: not in enabled drivers build config 00:13:00.844 net/softnic: not in enabled drivers build config 00:13:00.844 net/tap: not in enabled drivers build config 00:13:00.869 net/thunderx: not in enabled drivers build config 00:13:00.869 net/txgbe: not in enabled drivers build config 00:13:00.869 net/vdev_netvsc: not in enabled drivers build config 00:13:00.869 net/vhost: not in enabled drivers build config 00:13:00.869 net/virtio: not in enabled drivers build config 00:13:00.869 net/vmxnet3: not in enabled drivers build config 00:13:00.869 raw/*: missing internal dependency, "rawdev" 00:13:00.869 crypto/armv8: not in enabled drivers build config 00:13:00.869 crypto/bcmfs: not in enabled drivers build config 00:13:00.869 crypto/caam_jr: not in enabled drivers build config 00:13:00.869 crypto/ccp: not in enabled drivers build config 00:13:00.869 crypto/cnxk: not in enabled drivers build config 00:13:00.869 crypto/dpaa_sec: not in enabled drivers build config 00:13:00.869 crypto/dpaa2_sec: not in enabled drivers build config 00:13:00.869 crypto/ipsec_mb: not in enabled drivers build config 00:13:00.869 crypto/mlx5: not in enabled drivers build config 00:13:00.869 crypto/mvsam: not in enabled drivers build config 00:13:00.869 crypto/nitrox: not in enabled drivers build config 00:13:00.869 crypto/null: not in enabled drivers build config 00:13:00.869 crypto/octeontx: not in enabled drivers build config 00:13:00.869 crypto/openssl: not in enabled drivers build config 00:13:00.869 crypto/scheduler: not in enabled drivers build config 00:13:00.869 crypto/uadk: not in enabled drivers build config 00:13:00.869 crypto/virtio: not in enabled drivers build config 00:13:00.869 compress/isal: not in enabled drivers build config 00:13:00.869 compress/mlx5: not in enabled drivers build config 00:13:00.869 compress/octeontx: not in enabled drivers build config 00:13:00.869 
compress/zlib: not in enabled drivers build config 00:13:00.869 regex/*: missing internal dependency, "regexdev" 00:13:00.869 ml/*: missing internal dependency, "mldev" 00:13:00.869 vdpa/ifc: not in enabled drivers build config 00:13:00.869 vdpa/mlx5: not in enabled drivers build config 00:13:00.869 vdpa/nfp: not in enabled drivers build config 00:13:00.869 vdpa/sfc: not in enabled drivers build config 00:13:00.869 event/*: missing internal dependency, "eventdev" 00:13:00.869 baseband/*: missing internal dependency, "bbdev" 00:13:00.869 gpu/*: missing internal dependency, "gpudev" 00:13:00.869 00:13:00.869 00:13:00.869 Build targets in project: 85 00:13:00.869 00:13:00.869 DPDK 23.11.0 00:13:00.869 00:13:00.869 User defined options 00:13:00.869 buildtype : debug 00:13:00.869 default_library : shared 00:13:00.869 libdir : lib 00:13:00.869 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:13:00.869 b_sanitize : address 00:13:00.869 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:13:00.869 c_link_args : 00:13:00.869 cpu_instruction_set: native 00:13:00.869 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:13:00.869 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:13:00.869 enable_docs : false 00:13:00.869 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:13:00.869 enable_kmods : false 00:13:00.869 tests : false 00:13:00.869 00:13:00.869 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:13:00.869 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:13:00.869 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:13:00.869 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:13:00.869 [3/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:13:00.869 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:13:00.869 [5/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:13:00.869 [6/265] Linking static target lib/librte_kvargs.a 00:13:00.869 [7/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:13:00.869 [8/265] Linking static target lib/librte_log.a 00:13:00.869 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:13:01.128 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:13:01.128 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:13:01.431 [12/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:13:01.431 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:13:01.431 [14/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:13:01.431 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:13:01.431 [16/265] Linking static target lib/librte_telemetry.a 00:13:01.431 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:13:01.431 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:13:01.689 [19/265] Generating 
lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:13:01.689 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:13:01.689 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:13:01.689 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:13:01.689 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:13:01.947 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:13:01.947 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:13:01.947 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:13:02.206 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:13:02.206 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:13:02.206 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:13:02.206 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:13:02.206 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:13:02.206 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:13:02.464 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:13:02.723 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:13:02.723 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:13:02.723 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:13:02.723 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:13:02.723 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:13:02.723 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:13:02.981 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:13:02.981 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:13:02.981 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:13:02.981 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:13:03.239 [44/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:13:03.240 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:13:03.240 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:13:03.240 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:13:03.240 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:13:03.497 [49/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:13:03.497 [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:13:03.497 [51/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:13:03.497 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:13:03.497 [53/265] Linking target lib/librte_log.so.24.0 00:13:03.497 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:13:03.497 [55/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:13:03.497 [56/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:13:03.756 [57/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 
00:13:03.756 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:13:03.756 [59/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:13:03.756 [60/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:13:03.756 [61/265] Linking target lib/librte_kvargs.so.24.0 00:13:03.756 [62/265] Linking target lib/librte_telemetry.so.24.0 00:13:04.014 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:13:04.014 [64/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:13:04.014 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:13:04.014 [66/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:13:04.014 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:13:04.014 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:13:04.272 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:13:04.272 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:13:04.272 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:13:04.272 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:13:04.272 [73/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:13:04.531 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:13:04.531 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:13:04.531 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:13:04.531 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:13:04.531 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:13:04.789 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:13:04.789 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:13:05.047 [81/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:13:05.047 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:13:05.047 [83/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:13:05.047 [84/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:13:05.047 [85/265] Linking static target lib/librte_ring.a 00:13:05.047 [86/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:13:05.047 [87/265] Linking static target lib/librte_eal.a 00:13:05.305 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:13:05.305 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:13:05.305 [90/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:13:05.563 [91/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:13:05.563 [92/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:13:05.563 [93/265] Linking static target lib/librte_rcu.a 00:13:05.563 [94/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:13:05.821 [95/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:13:05.821 [96/265] Linking static target lib/librte_mempool.a 00:13:06.081 [97/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:13:06.081 [98/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:13:06.081 [99/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:13:06.338 
[100/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:13:06.338 [101/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:13:06.338 [102/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:13:06.338 [103/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:13:06.338 [104/265] Linking static target lib/librte_mbuf.a 00:13:06.596 [105/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:13:06.596 [106/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:13:06.596 [107/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:13:06.853 [108/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:13:06.853 [109/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:13:06.853 [110/265] Linking static target lib/librte_net.a 00:13:06.853 [111/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:13:06.853 [112/265] Linking static target lib/librte_meter.a 00:13:07.111 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:13:07.111 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:13:07.368 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:13:07.368 [116/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:13:07.368 [117/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:13:07.680 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:13:07.680 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:13:07.937 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:13:07.937 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:13:08.195 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:13:08.195 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:13:08.195 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:13:08.195 [125/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:13:08.195 [126/265] Linking static target lib/librte_pci.a 00:13:08.453 [127/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:13:08.453 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:13:08.453 [129/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:13:08.453 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:13:08.453 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:13:08.711 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:13:08.711 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:13:08.711 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:13:08.711 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:13:08.711 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:13:08.711 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:13:08.711 [138/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:13:08.969 [139/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:13:08.969 [140/265] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:13:08.969 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:13:08.969 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:13:08.969 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:13:08.969 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:13:08.969 [145/265] Linking static target lib/librte_cmdline.a 00:13:09.227 [146/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:13:09.227 [147/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:13:09.485 [148/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:13:09.485 [149/265] Linking static target lib/librte_ethdev.a 00:13:09.485 [150/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:13:09.485 [151/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:13:09.743 [152/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:13:09.743 [153/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:13:09.743 [154/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:13:09.743 [155/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:13:09.743 [156/265] Linking static target lib/librte_timer.a 00:13:10.001 [157/265] Linking static target lib/librte_compressdev.a 00:13:10.001 [158/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:13:10.001 [159/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:13:10.001 [160/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:13:10.001 [161/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:13:10.001 [162/265] Linking static target lib/librte_hash.a 00:13:10.259 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:13:10.259 [164/265] Linking static target lib/librte_dmadev.a 00:13:10.517 [165/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:13:10.517 [166/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:13:10.517 [167/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:13:10.517 [168/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:13:10.775 [169/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:13:10.775 [170/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:11.033 [171/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:11.033 [172/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:13:11.033 [173/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:13:11.033 [174/265] Linking static target lib/librte_cryptodev.a 00:13:11.033 [175/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:13:11.291 [176/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:13:11.291 [177/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:13:11.291 [178/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:13:11.291 [179/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 
00:13:11.291 [180/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:13:11.291 [181/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:13:11.549 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:13:11.549 [183/265] Linking static target lib/librte_power.a 00:13:11.808 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:13:11.808 [185/265] Linking static target lib/librte_reorder.a 00:13:11.808 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:13:11.808 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:13:11.808 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:13:12.066 [189/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:13:12.066 [190/265] Linking static target lib/librte_security.a 00:13:12.066 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:13:12.328 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:13:12.328 [193/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:13:12.593 [194/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:13:12.593 [195/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:13:12.854 [196/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:13:12.854 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:13:12.854 [198/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:13:12.854 [199/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:13:13.113 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:13:13.113 [201/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:13:13.113 [202/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:13:13.113 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:13:13.372 [204/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:13:13.372 [205/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:13:13.372 [206/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:13:13.372 [207/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:13:13.372 [208/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:13.372 [209/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:13:13.630 [210/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:13:13.630 [211/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:13:13.630 [212/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:13:13.630 [213/265] Linking static target drivers/librte_bus_vdev.a 00:13:13.630 [214/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:13:13.630 [215/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:13:13.630 [216/265] Linking static target drivers/librte_bus_pci.a 00:13:13.889 [217/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:13.889 [218/265] Compiling C object 
drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:13:13.889 [219/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:13:14.148 [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:13:14.148 [221/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:13:14.148 [222/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:13:14.148 [223/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:13:14.148 [224/265] Linking static target drivers/librte_mempool_ring.a 00:13:16.049 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:13:17.952 [226/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:18.887 [227/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:13:18.887 [228/265] Linking target lib/librte_eal.so.24.0 00:13:19.146 [229/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:13:19.146 [230/265] Linking target lib/librte_ring.so.24.0 00:13:19.146 [231/265] Linking target lib/librte_pci.so.24.0 00:13:19.146 [232/265] Linking target drivers/librte_bus_vdev.so.24.0 00:13:19.146 [233/265] Linking target lib/librte_dmadev.so.24.0 00:13:19.146 [234/265] Linking target lib/librte_timer.so.24.0 00:13:19.146 [235/265] Linking target lib/librte_meter.so.24.0 00:13:19.146 [236/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:13:19.146 [237/265] Linking static target lib/librte_vhost.a 00:13:19.146 [238/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:13:19.146 [239/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:13:19.146 [240/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:13:19.407 [241/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:13:19.407 [242/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:13:19.407 [243/265] Linking target drivers/librte_bus_pci.so.24.0 00:13:19.407 [244/265] Linking target lib/librte_rcu.so.24.0 00:13:19.407 [245/265] Linking target lib/librte_mempool.so.24.0 00:13:19.407 [246/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:13:19.407 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:13:19.672 [248/265] Linking target drivers/librte_mempool_ring.so.24.0 00:13:19.672 [249/265] Linking target lib/librte_mbuf.so.24.0 00:13:19.672 [250/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:13:19.672 [251/265] Linking target lib/librte_net.so.24.0 00:13:19.672 [252/265] Linking target lib/librte_reorder.so.24.0 00:13:19.672 [253/265] Linking target lib/librte_cryptodev.so.24.0 00:13:19.937 [254/265] Linking target lib/librte_compressdev.so.24.0 00:13:19.937 [255/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:13:19.937 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:13:19.937 [257/265] Linking target lib/librte_hash.so.24.0 00:13:19.937 [258/265] Linking target lib/librte_security.so.24.0 00:13:19.937 [259/265] Linking target lib/librte_cmdline.so.24.0 00:13:20.204 [260/265] Linking target lib/librte_ethdev.so.24.0 00:13:20.204 
[261/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:13:20.204 [262/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:13:20.473 [263/265] Linking target lib/librte_power.so.24.0 00:13:21.065 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:13:21.065 [265/265] Linking target lib/librte_vhost.so.24.0 00:13:21.065 INFO: autodetecting backend as ninja 00:13:21.065 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:13:22.456 CC lib/log/log.o 00:13:22.456 CC lib/log/log_flags.o 00:13:22.456 CC lib/log/log_deprecated.o 00:13:22.456 CC lib/ut_mock/mock.o 00:13:22.456 CC lib/ut/ut.o 00:13:22.456 LIB libspdk_ut_mock.a 00:13:22.456 LIB libspdk_log.a 00:13:22.714 SO libspdk_ut_mock.so.6.0 00:13:22.714 LIB libspdk_ut.a 00:13:22.714 SO libspdk_log.so.7.0 00:13:22.714 SO libspdk_ut.so.2.0 00:13:22.714 SYMLINK libspdk_ut_mock.so 00:13:22.714 SYMLINK libspdk_ut.so 00:13:22.714 SYMLINK libspdk_log.so 00:13:22.972 CC lib/util/base64.o 00:13:22.972 CC lib/util/cpuset.o 00:13:22.972 CC lib/util/bit_array.o 00:13:22.972 CC lib/util/crc16.o 00:13:22.972 CC lib/dma/dma.o 00:13:22.972 CXX lib/trace_parser/trace.o 00:13:22.972 CC lib/util/crc32c.o 00:13:22.972 CC lib/util/crc32.o 00:13:22.972 CC lib/ioat/ioat.o 00:13:23.230 CC lib/vfio_user/host/vfio_user_pci.o 00:13:23.230 CC lib/util/crc32_ieee.o 00:13:23.230 CC lib/vfio_user/host/vfio_user.o 00:13:23.230 CC lib/util/crc64.o 00:13:23.230 CC lib/util/dif.o 00:13:23.230 CC lib/util/fd.o 00:13:23.230 CC lib/util/file.o 00:13:23.230 LIB libspdk_dma.a 00:13:23.230 SO libspdk_dma.so.4.0 00:13:23.230 CC lib/util/hexlify.o 00:13:23.488 LIB libspdk_ioat.a 00:13:23.488 CC lib/util/iov.o 00:13:23.488 CC lib/util/math.o 00:13:23.488 SYMLINK libspdk_dma.so 00:13:23.488 CC lib/util/pipe.o 00:13:23.488 SO libspdk_ioat.so.7.0 00:13:23.488 CC lib/util/strerror_tls.o 00:13:23.488 CC lib/util/string.o 00:13:23.488 SYMLINK libspdk_ioat.so 00:13:23.488 CC lib/util/uuid.o 00:13:23.488 LIB libspdk_vfio_user.a 00:13:23.488 CC lib/util/fd_group.o 00:13:23.488 SO libspdk_vfio_user.so.5.0 00:13:23.488 CC lib/util/xor.o 00:13:23.745 CC lib/util/zipf.o 00:13:23.745 SYMLINK libspdk_vfio_user.so 00:13:24.004 LIB libspdk_util.a 00:13:24.004 SO libspdk_util.so.9.0 00:13:24.262 LIB libspdk_trace_parser.a 00:13:24.262 SO libspdk_trace_parser.so.5.0 00:13:24.262 SYMLINK libspdk_util.so 00:13:24.572 SYMLINK libspdk_trace_parser.so 00:13:24.572 CC lib/env_dpdk/env.o 00:13:24.572 CC lib/env_dpdk/memory.o 00:13:24.572 CC lib/env_dpdk/pci.o 00:13:24.572 CC lib/env_dpdk/threads.o 00:13:24.572 CC lib/env_dpdk/init.o 00:13:24.572 CC lib/idxd/idxd.o 00:13:24.572 CC lib/conf/conf.o 00:13:24.572 CC lib/json/json_parse.o 00:13:24.572 CC lib/vmd/vmd.o 00:13:24.572 CC lib/rdma/common.o 00:13:24.572 CC lib/env_dpdk/pci_ioat.o 00:13:24.831 LIB libspdk_conf.a 00:13:24.831 SO libspdk_conf.so.6.0 00:13:24.831 CC lib/json/json_util.o 00:13:24.831 CC lib/json/json_write.o 00:13:24.831 SYMLINK libspdk_conf.so 00:13:24.831 CC lib/env_dpdk/pci_virtio.o 00:13:24.831 CC lib/rdma/rdma_verbs.o 00:13:24.831 CC lib/env_dpdk/pci_vmd.o 00:13:24.831 CC lib/env_dpdk/pci_idxd.o 00:13:25.089 CC lib/vmd/led.o 00:13:25.089 CC lib/idxd/idxd_user.o 00:13:25.089 CC lib/env_dpdk/pci_event.o 00:13:25.089 CC lib/env_dpdk/sigbus_handler.o 00:13:25.089 LIB libspdk_rdma.a 00:13:25.089 LIB libspdk_json.a 00:13:25.089 CC lib/env_dpdk/pci_dpdk.o 
00:13:25.089 SO libspdk_rdma.so.6.0 00:13:25.089 SO libspdk_json.so.6.0 00:13:25.347 CC lib/env_dpdk/pci_dpdk_2207.o 00:13:25.347 CC lib/env_dpdk/pci_dpdk_2211.o 00:13:25.347 SYMLINK libspdk_json.so 00:13:25.347 SYMLINK libspdk_rdma.so 00:13:25.347 LIB libspdk_vmd.a 00:13:25.347 SO libspdk_vmd.so.6.0 00:13:25.347 LIB libspdk_idxd.a 00:13:25.347 SYMLINK libspdk_vmd.so 00:13:25.347 SO libspdk_idxd.so.12.0 00:13:25.604 CC lib/jsonrpc/jsonrpc_server.o 00:13:25.605 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:13:25.605 CC lib/jsonrpc/jsonrpc_client.o 00:13:25.605 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:13:25.605 SYMLINK libspdk_idxd.so 00:13:25.866 LIB libspdk_jsonrpc.a 00:13:25.866 SO libspdk_jsonrpc.so.6.0 00:13:25.866 SYMLINK libspdk_jsonrpc.so 00:13:26.123 LIB libspdk_env_dpdk.a 00:13:26.123 CC lib/rpc/rpc.o 00:13:26.389 SO libspdk_env_dpdk.so.14.0 00:13:26.389 LIB libspdk_rpc.a 00:13:26.670 SO libspdk_rpc.so.6.0 00:13:26.670 SYMLINK libspdk_env_dpdk.so 00:13:26.670 SYMLINK libspdk_rpc.so 00:13:26.928 CC lib/notify/notify_rpc.o 00:13:26.928 CC lib/notify/notify.o 00:13:26.928 CC lib/trace/trace.o 00:13:26.928 CC lib/keyring/keyring.o 00:13:26.928 CC lib/keyring/keyring_rpc.o 00:13:26.928 CC lib/trace/trace_rpc.o 00:13:26.928 CC lib/trace/trace_flags.o 00:13:27.186 LIB libspdk_notify.a 00:13:27.186 LIB libspdk_keyring.a 00:13:27.186 SO libspdk_notify.so.6.0 00:13:27.186 SO libspdk_keyring.so.1.0 00:13:27.186 LIB libspdk_trace.a 00:13:27.186 SYMLINK libspdk_notify.so 00:13:27.186 SYMLINK libspdk_keyring.so 00:13:27.186 SO libspdk_trace.so.10.0 00:13:27.445 SYMLINK libspdk_trace.so 00:13:27.704 CC lib/thread/iobuf.o 00:13:27.704 CC lib/thread/thread.o 00:13:27.704 CC lib/sock/sock.o 00:13:27.704 CC lib/sock/sock_rpc.o 00:13:28.276 LIB libspdk_sock.a 00:13:28.276 SO libspdk_sock.so.9.0 00:13:28.276 SYMLINK libspdk_sock.so 00:13:28.536 CC lib/nvme/nvme_fabric.o 00:13:28.536 CC lib/nvme/nvme_ctrlr_cmd.o 00:13:28.536 CC lib/nvme/nvme_ctrlr.o 00:13:28.536 CC lib/nvme/nvme_ns_cmd.o 00:13:28.536 CC lib/nvme/nvme_ns.o 00:13:28.536 CC lib/nvme/nvme_pcie.o 00:13:28.536 CC lib/nvme/nvme.o 00:13:28.536 CC lib/nvme/nvme_pcie_common.o 00:13:28.536 CC lib/nvme/nvme_qpair.o 00:13:29.481 CC lib/nvme/nvme_quirks.o 00:13:29.481 CC lib/nvme/nvme_transport.o 00:13:29.481 LIB libspdk_thread.a 00:13:29.481 CC lib/nvme/nvme_discovery.o 00:13:29.481 SO libspdk_thread.so.10.0 00:13:29.481 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:13:29.481 SYMLINK libspdk_thread.so 00:13:29.481 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:13:29.481 CC lib/nvme/nvme_tcp.o 00:13:29.740 CC lib/nvme/nvme_opal.o 00:13:29.741 CC lib/nvme/nvme_io_msg.o 00:13:29.741 CC lib/accel/accel.o 00:13:29.999 CC lib/nvme/nvme_poll_group.o 00:13:29.999 CC lib/nvme/nvme_zns.o 00:13:30.256 CC lib/blob/blobstore.o 00:13:30.257 CC lib/init/json_config.o 00:13:30.257 CC lib/init/subsystem.o 00:13:30.257 CC lib/virtio/virtio.o 00:13:30.515 CC lib/virtio/virtio_vhost_user.o 00:13:30.515 CC lib/virtio/virtio_vfio_user.o 00:13:30.515 CC lib/init/subsystem_rpc.o 00:13:30.515 CC lib/init/rpc.o 00:13:30.773 CC lib/nvme/nvme_stubs.o 00:13:30.773 CC lib/blob/request.o 00:13:30.773 CC lib/blob/zeroes.o 00:13:30.773 CC lib/accel/accel_rpc.o 00:13:30.773 LIB libspdk_init.a 00:13:30.773 SO libspdk_init.so.5.0 00:13:30.773 CC lib/virtio/virtio_pci.o 00:13:31.031 CC lib/accel/accel_sw.o 00:13:31.031 SYMLINK libspdk_init.so 00:13:31.031 CC lib/nvme/nvme_auth.o 00:13:31.031 CC lib/blob/blob_bs_dev.o 00:13:31.031 CC lib/nvme/nvme_cuse.o 00:13:31.031 CC lib/nvme/nvme_rdma.o 00:13:31.289 LIB 
libspdk_virtio.a 00:13:31.289 LIB libspdk_accel.a 00:13:31.289 SO libspdk_virtio.so.7.0 00:13:31.289 SO libspdk_accel.so.15.0 00:13:31.289 SYMLINK libspdk_virtio.so 00:13:31.547 CC lib/event/app.o 00:13:31.547 CC lib/event/reactor.o 00:13:31.547 CC lib/event/log_rpc.o 00:13:31.547 CC lib/event/scheduler_static.o 00:13:31.547 CC lib/event/app_rpc.o 00:13:31.547 SYMLINK libspdk_accel.so 00:13:31.547 CC lib/bdev/bdev.o 00:13:31.547 CC lib/bdev/bdev_zone.o 00:13:31.547 CC lib/bdev/bdev_rpc.o 00:13:31.806 CC lib/bdev/part.o 00:13:32.085 CC lib/bdev/scsi_nvme.o 00:13:32.085 LIB libspdk_event.a 00:13:32.085 SO libspdk_event.so.13.0 00:13:32.344 SYMLINK libspdk_event.so 00:13:32.909 LIB libspdk_nvme.a 00:13:32.909 SO libspdk_nvme.so.13.0 00:13:33.476 SYMLINK libspdk_nvme.so 00:13:34.044 LIB libspdk_blob.a 00:13:34.044 SO libspdk_blob.so.11.0 00:13:34.303 SYMLINK libspdk_blob.so 00:13:34.562 CC lib/lvol/lvol.o 00:13:34.562 CC lib/blobfs/blobfs.o 00:13:34.562 CC lib/blobfs/tree.o 00:13:35.131 LIB libspdk_bdev.a 00:13:35.389 SO libspdk_bdev.so.15.0 00:13:35.389 SYMLINK libspdk_bdev.so 00:13:35.647 CC lib/nbd/nbd.o 00:13:35.647 CC lib/nvmf/ctrlr.o 00:13:35.647 CC lib/nvmf/ctrlr_discovery.o 00:13:35.647 CC lib/nvmf/ctrlr_bdev.o 00:13:35.647 CC lib/nbd/nbd_rpc.o 00:13:35.647 CC lib/scsi/dev.o 00:13:35.647 LIB libspdk_blobfs.a 00:13:35.647 CC lib/ftl/ftl_core.o 00:13:35.647 CC lib/ublk/ublk.o 00:13:35.647 SO libspdk_blobfs.so.10.0 00:13:35.647 LIB libspdk_lvol.a 00:13:35.647 SO libspdk_lvol.so.10.0 00:13:35.905 SYMLINK libspdk_blobfs.so 00:13:35.905 CC lib/ublk/ublk_rpc.o 00:13:35.905 SYMLINK libspdk_lvol.so 00:13:35.905 CC lib/scsi/lun.o 00:13:35.905 CC lib/scsi/port.o 00:13:35.905 CC lib/scsi/scsi.o 00:13:35.905 CC lib/scsi/scsi_bdev.o 00:13:35.905 CC lib/nvmf/subsystem.o 00:13:36.164 CC lib/ftl/ftl_init.o 00:13:36.164 CC lib/ftl/ftl_layout.o 00:13:36.164 LIB libspdk_nbd.a 00:13:36.164 CC lib/nvmf/nvmf.o 00:13:36.164 SO libspdk_nbd.so.7.0 00:13:36.164 CC lib/ftl/ftl_debug.o 00:13:36.164 SYMLINK libspdk_nbd.so 00:13:36.164 CC lib/ftl/ftl_io.o 00:13:36.422 CC lib/ftl/ftl_sb.o 00:13:36.422 LIB libspdk_ublk.a 00:13:36.422 CC lib/nvmf/nvmf_rpc.o 00:13:36.422 CC lib/scsi/scsi_pr.o 00:13:36.422 CC lib/ftl/ftl_l2p.o 00:13:36.422 SO libspdk_ublk.so.3.0 00:13:36.681 CC lib/scsi/scsi_rpc.o 00:13:36.681 CC lib/scsi/task.o 00:13:36.681 SYMLINK libspdk_ublk.so 00:13:36.681 CC lib/ftl/ftl_l2p_flat.o 00:13:36.681 CC lib/ftl/ftl_nv_cache.o 00:13:36.681 CC lib/ftl/ftl_band.o 00:13:36.681 CC lib/ftl/ftl_band_ops.o 00:13:36.940 CC lib/ftl/ftl_writer.o 00:13:36.940 CC lib/ftl/ftl_rq.o 00:13:36.940 LIB libspdk_scsi.a 00:13:36.940 SO libspdk_scsi.so.9.0 00:13:37.198 CC lib/ftl/ftl_reloc.o 00:13:37.198 SYMLINK libspdk_scsi.so 00:13:37.198 CC lib/nvmf/transport.o 00:13:37.198 CC lib/nvmf/tcp.o 00:13:37.198 CC lib/nvmf/rdma.o 00:13:37.456 CC lib/ftl/ftl_l2p_cache.o 00:13:37.456 CC lib/ftl/ftl_p2l.o 00:13:37.456 CC lib/ftl/mngt/ftl_mngt.o 00:13:37.456 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:13:37.715 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:13:37.715 CC lib/iscsi/conn.o 00:13:37.715 CC lib/ftl/mngt/ftl_mngt_startup.o 00:13:37.715 CC lib/iscsi/init_grp.o 00:13:37.715 CC lib/iscsi/iscsi.o 00:13:37.973 CC lib/iscsi/md5.o 00:13:37.973 CC lib/iscsi/param.o 00:13:37.973 CC lib/iscsi/portal_grp.o 00:13:37.973 CC lib/iscsi/tgt_node.o 00:13:37.973 CC lib/ftl/mngt/ftl_mngt_md.o 00:13:37.973 CC lib/ftl/mngt/ftl_mngt_misc.o 00:13:38.231 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:13:38.231 CC lib/iscsi/iscsi_subsystem.o 00:13:38.231 CC 
lib/iscsi/iscsi_rpc.o 00:13:38.490 CC lib/iscsi/task.o 00:13:38.490 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:13:38.490 CC lib/ftl/mngt/ftl_mngt_band.o 00:13:38.490 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:13:38.490 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:13:38.748 CC lib/vhost/vhost.o 00:13:38.748 CC lib/vhost/vhost_rpc.o 00:13:38.748 CC lib/vhost/vhost_scsi.o 00:13:38.748 CC lib/vhost/vhost_blk.o 00:13:38.748 CC lib/vhost/rte_vhost_user.o 00:13:38.748 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:13:39.007 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:13:39.265 CC lib/ftl/utils/ftl_conf.o 00:13:39.265 CC lib/ftl/utils/ftl_md.o 00:13:39.265 CC lib/ftl/utils/ftl_mempool.o 00:13:39.265 CC lib/ftl/utils/ftl_bitmap.o 00:13:39.524 CC lib/ftl/utils/ftl_property.o 00:13:39.524 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:13:39.524 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:13:39.524 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:13:39.783 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:13:39.783 LIB libspdk_iscsi.a 00:13:39.783 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:13:39.783 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:13:39.783 CC lib/ftl/upgrade/ftl_sb_v3.o 00:13:39.783 SO libspdk_iscsi.so.8.0 00:13:39.783 LIB libspdk_nvmf.a 00:13:39.783 CC lib/ftl/upgrade/ftl_sb_v5.o 00:13:39.783 CC lib/ftl/nvc/ftl_nvc_dev.o 00:13:39.783 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:13:40.042 CC lib/ftl/base/ftl_base_dev.o 00:13:40.042 SO libspdk_nvmf.so.18.0 00:13:40.042 LIB libspdk_vhost.a 00:13:40.042 CC lib/ftl/base/ftl_base_bdev.o 00:13:40.042 CC lib/ftl/ftl_trace.o 00:13:40.042 SYMLINK libspdk_iscsi.so 00:13:40.042 SO libspdk_vhost.so.8.0 00:13:40.300 SYMLINK libspdk_vhost.so 00:13:40.300 SYMLINK libspdk_nvmf.so 00:13:40.300 LIB libspdk_ftl.a 00:13:40.558 SO libspdk_ftl.so.9.0 00:13:41.125 SYMLINK libspdk_ftl.so 00:13:41.383 CC module/env_dpdk/env_dpdk_rpc.o 00:13:41.642 CC module/sock/posix/posix.o 00:13:41.642 CC module/blob/bdev/blob_bdev.o 00:13:41.642 CC module/accel/error/accel_error.o 00:13:41.642 CC module/keyring/file/keyring.o 00:13:41.642 CC module/accel/dsa/accel_dsa.o 00:13:41.642 CC module/scheduler/dynamic/scheduler_dynamic.o 00:13:41.642 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:13:41.642 CC module/accel/iaa/accel_iaa.o 00:13:41.642 CC module/accel/ioat/accel_ioat.o 00:13:41.642 LIB libspdk_env_dpdk_rpc.a 00:13:41.642 SO libspdk_env_dpdk_rpc.so.6.0 00:13:41.642 CC module/keyring/file/keyring_rpc.o 00:13:41.642 LIB libspdk_scheduler_dpdk_governor.a 00:13:41.642 SO libspdk_scheduler_dpdk_governor.so.4.0 00:13:41.642 SYMLINK libspdk_env_dpdk_rpc.so 00:13:41.900 CC module/accel/ioat/accel_ioat_rpc.o 00:13:41.900 LIB libspdk_scheduler_dynamic.a 00:13:41.900 CC module/accel/iaa/accel_iaa_rpc.o 00:13:41.900 CC module/accel/error/accel_error_rpc.o 00:13:41.900 SO libspdk_scheduler_dynamic.so.4.0 00:13:41.900 SYMLINK libspdk_scheduler_dpdk_governor.so 00:13:41.900 CC module/accel/dsa/accel_dsa_rpc.o 00:13:41.900 LIB libspdk_blob_bdev.a 00:13:41.900 SYMLINK libspdk_scheduler_dynamic.so 00:13:41.900 LIB libspdk_keyring_file.a 00:13:41.900 SO libspdk_blob_bdev.so.11.0 00:13:41.900 SO libspdk_keyring_file.so.1.0 00:13:41.900 LIB libspdk_accel_ioat.a 00:13:41.900 SYMLINK libspdk_blob_bdev.so 00:13:41.900 SO libspdk_accel_ioat.so.6.0 00:13:41.900 LIB libspdk_accel_error.a 00:13:41.900 SYMLINK libspdk_keyring_file.so 00:13:42.158 LIB libspdk_accel_iaa.a 00:13:42.158 LIB libspdk_accel_dsa.a 00:13:42.158 SO libspdk_accel_error.so.2.0 00:13:42.158 SO libspdk_accel_iaa.so.3.0 00:13:42.158 CC module/scheduler/gscheduler/gscheduler.o 00:13:42.158 SO 
libspdk_accel_dsa.so.5.0 00:13:42.158 SYMLINK libspdk_accel_ioat.so 00:13:42.158 SYMLINK libspdk_accel_error.so 00:13:42.158 SYMLINK libspdk_accel_iaa.so 00:13:42.158 SYMLINK libspdk_accel_dsa.so 00:13:42.158 LIB libspdk_scheduler_gscheduler.a 00:13:42.417 SO libspdk_scheduler_gscheduler.so.4.0 00:13:42.417 CC module/bdev/error/vbdev_error.o 00:13:42.417 CC module/bdev/delay/vbdev_delay.o 00:13:42.417 CC module/blobfs/bdev/blobfs_bdev.o 00:13:42.417 CC module/bdev/lvol/vbdev_lvol.o 00:13:42.417 CC module/bdev/gpt/gpt.o 00:13:42.417 CC module/bdev/null/bdev_null.o 00:13:42.417 CC module/bdev/malloc/bdev_malloc.o 00:13:42.417 SYMLINK libspdk_scheduler_gscheduler.so 00:13:42.417 CC module/bdev/nvme/bdev_nvme.o 00:13:42.417 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:13:42.677 LIB libspdk_sock_posix.a 00:13:42.677 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:13:42.677 SO libspdk_sock_posix.so.6.0 00:13:42.677 CC module/bdev/gpt/vbdev_gpt.o 00:13:42.677 CC module/bdev/error/vbdev_error_rpc.o 00:13:42.677 CC module/bdev/null/bdev_null_rpc.o 00:13:42.677 SYMLINK libspdk_sock_posix.so 00:13:42.677 CC module/bdev/nvme/bdev_nvme_rpc.o 00:13:42.677 LIB libspdk_blobfs_bdev.a 00:13:42.677 CC module/bdev/delay/vbdev_delay_rpc.o 00:13:42.935 SO libspdk_blobfs_bdev.so.6.0 00:13:42.935 CC module/bdev/nvme/nvme_rpc.o 00:13:42.935 CC module/bdev/malloc/bdev_malloc_rpc.o 00:13:42.935 LIB libspdk_bdev_error.a 00:13:42.935 SO libspdk_bdev_error.so.6.0 00:13:42.935 SYMLINK libspdk_blobfs_bdev.so 00:13:42.935 CC module/bdev/nvme/bdev_mdns_client.o 00:13:42.935 LIB libspdk_bdev_null.a 00:13:42.935 LIB libspdk_bdev_lvol.a 00:13:42.935 SO libspdk_bdev_null.so.6.0 00:13:42.935 LIB libspdk_bdev_delay.a 00:13:42.935 SO libspdk_bdev_lvol.so.6.0 00:13:42.935 SYMLINK libspdk_bdev_error.so 00:13:42.935 SO libspdk_bdev_delay.so.6.0 00:13:42.935 LIB libspdk_bdev_malloc.a 00:13:43.194 SYMLINK libspdk_bdev_null.so 00:13:43.194 CC module/bdev/nvme/vbdev_opal.o 00:13:43.194 SYMLINK libspdk_bdev_lvol.so 00:13:43.194 CC module/bdev/nvme/vbdev_opal_rpc.o 00:13:43.194 LIB libspdk_bdev_gpt.a 00:13:43.194 SO libspdk_bdev_malloc.so.6.0 00:13:43.194 SYMLINK libspdk_bdev_delay.so 00:13:43.194 SO libspdk_bdev_gpt.so.6.0 00:13:43.194 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:13:43.194 SYMLINK libspdk_bdev_malloc.so 00:13:43.194 SYMLINK libspdk_bdev_gpt.so 00:13:43.194 CC module/bdev/raid/bdev_raid.o 00:13:43.453 CC module/bdev/split/vbdev_split.o 00:13:43.453 CC module/bdev/passthru/vbdev_passthru.o 00:13:43.453 CC module/bdev/split/vbdev_split_rpc.o 00:13:43.453 CC module/bdev/raid/bdev_raid_rpc.o 00:13:43.453 CC module/bdev/raid/bdev_raid_sb.o 00:13:43.453 CC module/bdev/zone_block/vbdev_zone_block.o 00:13:43.453 CC module/bdev/xnvme/bdev_xnvme.o 00:13:43.453 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:13:43.453 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:13:43.711 LIB libspdk_bdev_split.a 00:13:43.711 SO libspdk_bdev_split.so.6.0 00:13:43.711 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:13:43.711 SYMLINK libspdk_bdev_split.so 00:13:43.711 CC module/bdev/raid/raid0.o 00:13:43.711 CC module/bdev/raid/raid1.o 00:13:43.711 CC module/bdev/raid/concat.o 00:13:43.711 LIB libspdk_bdev_xnvme.a 00:13:43.970 SO libspdk_bdev_xnvme.so.3.0 00:13:43.970 LIB libspdk_bdev_zone_block.a 00:13:43.970 CC module/bdev/aio/bdev_aio.o 00:13:43.970 SO libspdk_bdev_zone_block.so.6.0 00:13:43.970 LIB libspdk_bdev_passthru.a 00:13:43.970 SYMLINK libspdk_bdev_xnvme.so 00:13:43.970 SO libspdk_bdev_passthru.so.6.0 00:13:43.970 SYMLINK libspdk_bdev_zone_block.so 
00:13:43.970 CC module/bdev/aio/bdev_aio_rpc.o 00:13:43.970 SYMLINK libspdk_bdev_passthru.so 00:13:43.970 CC module/bdev/ftl/bdev_ftl.o 00:13:43.970 CC module/bdev/ftl/bdev_ftl_rpc.o 00:13:44.228 CC module/bdev/iscsi/bdev_iscsi.o 00:13:44.228 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:13:44.228 CC module/bdev/virtio/bdev_virtio_scsi.o 00:13:44.228 CC module/bdev/virtio/bdev_virtio_blk.o 00:13:44.228 CC module/bdev/virtio/bdev_virtio_rpc.o 00:13:44.228 LIB libspdk_bdev_aio.a 00:13:44.228 SO libspdk_bdev_aio.so.6.0 00:13:44.487 LIB libspdk_bdev_ftl.a 00:13:44.487 SYMLINK libspdk_bdev_aio.so 00:13:44.487 SO libspdk_bdev_ftl.so.6.0 00:13:44.487 SYMLINK libspdk_bdev_ftl.so 00:13:44.487 LIB libspdk_bdev_iscsi.a 00:13:44.746 LIB libspdk_bdev_raid.a 00:13:44.746 SO libspdk_bdev_iscsi.so.6.0 00:13:44.746 SO libspdk_bdev_raid.so.6.0 00:13:44.746 SYMLINK libspdk_bdev_iscsi.so 00:13:44.746 LIB libspdk_bdev_virtio.a 00:13:45.005 SYMLINK libspdk_bdev_raid.so 00:13:45.005 SO libspdk_bdev_virtio.so.6.0 00:13:45.005 SYMLINK libspdk_bdev_virtio.so 00:13:45.573 LIB libspdk_bdev_nvme.a 00:13:45.573 SO libspdk_bdev_nvme.so.7.0 00:13:45.573 SYMLINK libspdk_bdev_nvme.so 00:13:46.142 CC module/event/subsystems/vmd/vmd.o 00:13:46.142 CC module/event/subsystems/vmd/vmd_rpc.o 00:13:46.142 CC module/event/subsystems/keyring/keyring.o 00:13:46.142 CC module/event/subsystems/iobuf/iobuf.o 00:13:46.142 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:13:46.142 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:13:46.142 CC module/event/subsystems/sock/sock.o 00:13:46.142 CC module/event/subsystems/scheduler/scheduler.o 00:13:46.400 LIB libspdk_event_sock.a 00:13:46.400 LIB libspdk_event_keyring.a 00:13:46.400 SO libspdk_event_sock.so.5.0 00:13:46.400 LIB libspdk_event_vhost_blk.a 00:13:46.400 LIB libspdk_event_vmd.a 00:13:46.400 LIB libspdk_event_iobuf.a 00:13:46.400 LIB libspdk_event_scheduler.a 00:13:46.400 SO libspdk_event_keyring.so.1.0 00:13:46.400 SO libspdk_event_scheduler.so.4.0 00:13:46.400 SO libspdk_event_vhost_blk.so.3.0 00:13:46.400 SO libspdk_event_vmd.so.6.0 00:13:46.400 SO libspdk_event_iobuf.so.3.0 00:13:46.400 SYMLINK libspdk_event_sock.so 00:13:46.400 SYMLINK libspdk_event_keyring.so 00:13:46.400 SYMLINK libspdk_event_vhost_blk.so 00:13:46.400 SYMLINK libspdk_event_scheduler.so 00:13:46.400 SYMLINK libspdk_event_vmd.so 00:13:46.400 SYMLINK libspdk_event_iobuf.so 00:13:47.060 CC module/event/subsystems/accel/accel.o 00:13:47.060 LIB libspdk_event_accel.a 00:13:47.060 SO libspdk_event_accel.so.6.0 00:13:47.060 SYMLINK libspdk_event_accel.so 00:13:47.628 CC module/event/subsystems/bdev/bdev.o 00:13:47.628 LIB libspdk_event_bdev.a 00:13:47.628 SO libspdk_event_bdev.so.6.0 00:13:47.887 SYMLINK libspdk_event_bdev.so 00:13:48.146 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:13:48.146 CC module/event/subsystems/scsi/scsi.o 00:13:48.146 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:13:48.146 CC module/event/subsystems/ublk/ublk.o 00:13:48.146 CC module/event/subsystems/nbd/nbd.o 00:13:48.146 LIB libspdk_event_ublk.a 00:13:48.146 LIB libspdk_event_nbd.a 00:13:48.146 LIB libspdk_event_scsi.a 00:13:48.404 SO libspdk_event_ublk.so.3.0 00:13:48.404 SO libspdk_event_nbd.so.6.0 00:13:48.404 SO libspdk_event_scsi.so.6.0 00:13:48.404 LIB libspdk_event_nvmf.a 00:13:48.404 SYMLINK libspdk_event_nbd.so 00:13:48.404 SYMLINK libspdk_event_ublk.so 00:13:48.404 SYMLINK libspdk_event_scsi.so 00:13:48.404 SO libspdk_event_nvmf.so.6.0 00:13:48.404 SYMLINK libspdk_event_nvmf.so 00:13:48.664 CC module/event/subsystems/iscsi/iscsi.o 
00:13:48.664 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:13:48.923 LIB libspdk_event_vhost_scsi.a 00:13:48.923 LIB libspdk_event_iscsi.a 00:13:48.923 SO libspdk_event_vhost_scsi.so.3.0 00:13:48.923 SO libspdk_event_iscsi.so.6.0 00:13:48.923 SYMLINK libspdk_event_vhost_scsi.so 00:13:48.923 SYMLINK libspdk_event_iscsi.so 00:13:49.182 SO libspdk.so.6.0 00:13:49.182 SYMLINK libspdk.so 00:13:49.441 CXX app/trace/trace.o 00:13:49.700 CC examples/sock/hello_world/hello_sock.o 00:13:49.700 CC examples/accel/perf/accel_perf.o 00:13:49.700 CC examples/nvme/hello_world/hello_world.o 00:13:49.700 CC examples/ioat/perf/perf.o 00:13:49.700 CC examples/vmd/lsvmd/lsvmd.o 00:13:49.700 CC examples/nvmf/nvmf/nvmf.o 00:13:49.700 CC examples/bdev/hello_world/hello_bdev.o 00:13:49.700 CC examples/blob/hello_world/hello_blob.o 00:13:49.700 CC test/accel/dif/dif.o 00:13:49.700 LINK lsvmd 00:13:49.959 LINK hello_world 00:13:49.959 LINK spdk_trace 00:13:49.959 LINK ioat_perf 00:13:49.959 LINK hello_blob 00:13:49.959 LINK hello_sock 00:13:49.959 LINK hello_bdev 00:13:49.959 LINK nvmf 00:13:50.218 CC examples/vmd/led/led.o 00:13:50.218 CC examples/nvme/reconnect/reconnect.o 00:13:50.218 LINK dif 00:13:50.218 CC examples/ioat/verify/verify.o 00:13:50.218 LINK accel_perf 00:13:50.219 CC app/trace_record/trace_record.o 00:13:50.219 CC examples/blob/cli/blobcli.o 00:13:50.490 CC examples/bdev/bdevperf/bdevperf.o 00:13:50.490 LINK led 00:13:50.490 CC examples/util/zipf/zipf.o 00:13:50.490 LINK verify 00:13:50.490 CC test/app/bdev_svc/bdev_svc.o 00:13:50.490 CC test/app/histogram_perf/histogram_perf.o 00:13:50.775 LINK reconnect 00:13:50.775 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:13:50.775 CC test/app/jsoncat/jsoncat.o 00:13:50.775 LINK spdk_trace_record 00:13:50.775 LINK zipf 00:13:50.775 LINK bdev_svc 00:13:50.775 LINK histogram_perf 00:13:50.775 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:13:50.775 LINK jsoncat 00:13:50.775 LINK blobcli 00:13:51.034 CC examples/nvme/nvme_manage/nvme_manage.o 00:13:51.034 CC examples/nvme/arbitration/arbitration.o 00:13:51.034 CC app/nvmf_tgt/nvmf_main.o 00:13:51.034 CC app/iscsi_tgt/iscsi_tgt.o 00:13:51.034 CC app/spdk_lspci/spdk_lspci.o 00:13:51.034 CC app/spdk_tgt/spdk_tgt.o 00:13:51.034 LINK nvme_fuzz 00:13:51.292 CC app/spdk_nvme_perf/perf.o 00:13:51.292 LINK spdk_lspci 00:13:51.292 LINK nvmf_tgt 00:13:51.292 LINK bdevperf 00:13:51.292 LINK iscsi_tgt 00:13:51.292 LINK spdk_tgt 00:13:51.550 LINK arbitration 00:13:51.550 LINK nvme_manage 00:13:51.550 CC examples/thread/thread/thread_ex.o 00:13:51.550 CC examples/interrupt_tgt/interrupt_tgt.o 00:13:51.808 CC examples/nvme/hotplug/hotplug.o 00:13:51.808 CC examples/idxd/perf/perf.o 00:13:51.808 CC examples/nvme/cmb_copy/cmb_copy.o 00:13:51.808 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:13:51.808 CC app/spdk_nvme_identify/identify.o 00:13:51.808 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:13:51.808 LINK interrupt_tgt 00:13:51.808 LINK cmb_copy 00:13:52.067 LINK thread 00:13:52.067 CC examples/nvme/abort/abort.o 00:13:52.067 LINK hotplug 00:13:52.067 LINK idxd_perf 00:13:52.067 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:13:52.325 CC test/app/stub/stub.o 00:13:52.325 LINK vhost_fuzz 00:13:52.325 CC app/spdk_nvme_discover/discovery_aer.o 00:13:52.325 LINK spdk_nvme_perf 00:13:52.325 LINK stub 00:13:52.325 LINK pmr_persistence 00:13:52.584 CC test/bdev/bdevio/bdevio.o 00:13:52.584 LINK abort 00:13:52.584 CC test/blobfs/mkfs/mkfs.o 00:13:52.584 LINK spdk_nvme_discover 00:13:52.584 CC app/spdk_top/spdk_top.o 
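The TEST_HEADER and CXX test/cpp_headers records that follow exercise one tiny translation unit per public header, verifying that each header compiles on its own. A minimal sketch of that idea, assuming the repository root as the working directory and illustrative compiler flags; the real harness generates its sources differently.

```bash
#!/usr/bin/env bash
# Sketch (not the actual harness) of what the test/cpp_headers pass below
# checks: every public header must compile as its own translation unit.
set -euo pipefail

for hdr in include/spdk/*.h; do
    tu=$(mktemp --suffix=.cpp)
    printf '#include "spdk/%s"\nint main() { return 0; }\n' "${hdr##*/}" > "$tu"
    # The CI compiles these as C++ (hence the CXX records); do the same here.
    c++ -Iinclude -c "$tu" -o /dev/null || echo "not self-contained: $hdr"
    rm -f "$tu"
done
```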
00:13:52.584 TEST_HEADER include/spdk/accel.h 00:13:52.584 TEST_HEADER include/spdk/accel_module.h 00:13:52.584 TEST_HEADER include/spdk/assert.h 00:13:52.584 TEST_HEADER include/spdk/barrier.h 00:13:52.584 TEST_HEADER include/spdk/base64.h 00:13:52.584 TEST_HEADER include/spdk/bdev.h 00:13:52.584 TEST_HEADER include/spdk/bdev_module.h 00:13:52.584 TEST_HEADER include/spdk/bdev_zone.h 00:13:52.584 TEST_HEADER include/spdk/bit_array.h 00:13:52.584 TEST_HEADER include/spdk/bit_pool.h 00:13:52.584 TEST_HEADER include/spdk/blob_bdev.h 00:13:52.584 TEST_HEADER include/spdk/blobfs_bdev.h 00:13:52.584 TEST_HEADER include/spdk/blobfs.h 00:13:52.584 TEST_HEADER include/spdk/blob.h 00:13:52.584 TEST_HEADER include/spdk/conf.h 00:13:52.584 TEST_HEADER include/spdk/config.h 00:13:52.584 TEST_HEADER include/spdk/cpuset.h 00:13:52.584 TEST_HEADER include/spdk/crc16.h 00:13:52.584 TEST_HEADER include/spdk/crc32.h 00:13:52.584 TEST_HEADER include/spdk/crc64.h 00:13:52.584 TEST_HEADER include/spdk/dif.h 00:13:52.584 TEST_HEADER include/spdk/dma.h 00:13:52.584 TEST_HEADER include/spdk/endian.h 00:13:52.584 TEST_HEADER include/spdk/env_dpdk.h 00:13:52.584 TEST_HEADER include/spdk/env.h 00:13:52.584 TEST_HEADER include/spdk/event.h 00:13:52.584 TEST_HEADER include/spdk/fd_group.h 00:13:52.843 TEST_HEADER include/spdk/fd.h 00:13:52.843 TEST_HEADER include/spdk/file.h 00:13:52.843 TEST_HEADER include/spdk/ftl.h 00:13:52.843 TEST_HEADER include/spdk/gpt_spec.h 00:13:52.843 TEST_HEADER include/spdk/hexlify.h 00:13:52.843 TEST_HEADER include/spdk/histogram_data.h 00:13:52.843 TEST_HEADER include/spdk/idxd.h 00:13:52.843 TEST_HEADER include/spdk/idxd_spec.h 00:13:52.843 TEST_HEADER include/spdk/init.h 00:13:52.843 TEST_HEADER include/spdk/ioat.h 00:13:52.843 TEST_HEADER include/spdk/ioat_spec.h 00:13:52.843 TEST_HEADER include/spdk/iscsi_spec.h 00:13:52.843 TEST_HEADER include/spdk/json.h 00:13:52.843 TEST_HEADER include/spdk/jsonrpc.h 00:13:52.843 TEST_HEADER include/spdk/keyring.h 00:13:52.843 TEST_HEADER include/spdk/keyring_module.h 00:13:52.843 TEST_HEADER include/spdk/likely.h 00:13:52.843 TEST_HEADER include/spdk/log.h 00:13:52.843 TEST_HEADER include/spdk/lvol.h 00:13:52.843 TEST_HEADER include/spdk/memory.h 00:13:52.843 TEST_HEADER include/spdk/mmio.h 00:13:52.843 TEST_HEADER include/spdk/nbd.h 00:13:52.843 TEST_HEADER include/spdk/notify.h 00:13:52.843 TEST_HEADER include/spdk/nvme.h 00:13:52.843 TEST_HEADER include/spdk/nvme_intel.h 00:13:52.843 LINK mkfs 00:13:52.843 TEST_HEADER include/spdk/nvme_ocssd.h 00:13:52.843 CC app/vhost/vhost.o 00:13:52.843 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:13:52.843 TEST_HEADER include/spdk/nvme_spec.h 00:13:52.843 TEST_HEADER include/spdk/nvme_zns.h 00:13:52.843 TEST_HEADER include/spdk/nvmf_cmd.h 00:13:52.843 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:13:52.843 TEST_HEADER include/spdk/nvmf.h 00:13:52.843 TEST_HEADER include/spdk/nvmf_spec.h 00:13:52.843 TEST_HEADER include/spdk/nvmf_transport.h 00:13:52.843 CC app/spdk_dd/spdk_dd.o 00:13:52.843 TEST_HEADER include/spdk/opal.h 00:13:52.843 LINK spdk_nvme_identify 00:13:52.843 TEST_HEADER include/spdk/opal_spec.h 00:13:52.843 TEST_HEADER include/spdk/pci_ids.h 00:13:52.843 TEST_HEADER include/spdk/pipe.h 00:13:52.843 TEST_HEADER include/spdk/queue.h 00:13:52.843 TEST_HEADER include/spdk/reduce.h 00:13:52.843 TEST_HEADER include/spdk/rpc.h 00:13:52.843 TEST_HEADER include/spdk/scheduler.h 00:13:52.843 TEST_HEADER include/spdk/scsi.h 00:13:52.843 TEST_HEADER include/spdk/scsi_spec.h 00:13:52.843 TEST_HEADER 
include/spdk/sock.h 00:13:52.843 TEST_HEADER include/spdk/stdinc.h 00:13:52.844 TEST_HEADER include/spdk/string.h 00:13:52.844 TEST_HEADER include/spdk/thread.h 00:13:52.844 TEST_HEADER include/spdk/trace.h 00:13:52.844 TEST_HEADER include/spdk/trace_parser.h 00:13:52.844 TEST_HEADER include/spdk/tree.h 00:13:52.844 TEST_HEADER include/spdk/ublk.h 00:13:52.844 TEST_HEADER include/spdk/util.h 00:13:52.844 TEST_HEADER include/spdk/uuid.h 00:13:52.844 TEST_HEADER include/spdk/version.h 00:13:52.844 TEST_HEADER include/spdk/vfio_user_pci.h 00:13:52.844 TEST_HEADER include/spdk/vfio_user_spec.h 00:13:52.844 TEST_HEADER include/spdk/vhost.h 00:13:52.844 TEST_HEADER include/spdk/vmd.h 00:13:52.844 TEST_HEADER include/spdk/xor.h 00:13:52.844 TEST_HEADER include/spdk/zipf.h 00:13:52.844 CXX test/cpp_headers/accel.o 00:13:52.844 LINK bdevio 00:13:52.844 CC app/fio/nvme/fio_plugin.o 00:13:53.101 CXX test/cpp_headers/accel_module.o 00:13:53.101 CXX test/cpp_headers/assert.o 00:13:53.101 LINK iscsi_fuzz 00:13:53.101 LINK vhost 00:13:53.101 CC test/dma/test_dma/test_dma.o 00:13:53.101 CXX test/cpp_headers/barrier.o 00:13:53.441 CXX test/cpp_headers/base64.o 00:13:53.441 LINK spdk_dd 00:13:53.441 CC app/fio/bdev/fio_plugin.o 00:13:53.441 CXX test/cpp_headers/bdev.o 00:13:53.441 CC test/env/vtophys/vtophys.o 00:13:53.441 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:13:53.441 CC test/env/memory/memory_ut.o 00:13:53.441 CC test/env/mem_callbacks/mem_callbacks.o 00:13:53.700 LINK test_dma 00:13:53.700 CXX test/cpp_headers/bdev_module.o 00:13:53.700 LINK vtophys 00:13:53.700 LINK env_dpdk_post_init 00:13:53.700 LINK spdk_top 00:13:53.700 LINK spdk_nvme 00:13:53.700 CC test/env/pci/pci_ut.o 00:13:53.960 CXX test/cpp_headers/bdev_zone.o 00:13:53.960 LINK spdk_bdev 00:13:53.960 CC test/rpc_client/rpc_client_test.o 00:13:53.960 CC test/nvme/aer/aer.o 00:13:54.218 CC test/event/event_perf/event_perf.o 00:13:54.218 CXX test/cpp_headers/bit_array.o 00:13:54.218 CXX test/cpp_headers/bit_pool.o 00:13:54.218 CC test/thread/poller_perf/poller_perf.o 00:13:54.218 LINK mem_callbacks 00:13:54.218 CC test/lvol/esnap/esnap.o 00:13:54.218 LINK event_perf 00:13:54.218 LINK rpc_client_test 00:13:54.218 CXX test/cpp_headers/blob_bdev.o 00:13:54.218 LINK poller_perf 00:13:54.218 CXX test/cpp_headers/blobfs_bdev.o 00:13:54.475 LINK pci_ut 00:13:54.475 CC test/nvme/reset/reset.o 00:13:54.475 LINK aer 00:13:54.475 LINK memory_ut 00:13:54.475 CC test/event/reactor/reactor.o 00:13:54.475 CC test/event/reactor_perf/reactor_perf.o 00:13:54.734 CC test/event/app_repeat/app_repeat.o 00:13:54.734 CXX test/cpp_headers/blobfs.o 00:13:54.734 LINK reset 00:13:54.734 LINK reactor 00:13:54.734 LINK reactor_perf 00:13:54.734 CC test/event/scheduler/scheduler.o 00:13:54.734 CC test/nvme/sgl/sgl.o 00:13:54.734 LINK app_repeat 00:13:54.734 CXX test/cpp_headers/blob.o 00:13:54.734 CC test/nvme/e2edp/nvme_dp.o 00:13:54.992 CC test/nvme/overhead/overhead.o 00:13:54.992 CXX test/cpp_headers/conf.o 00:13:54.992 LINK scheduler 00:13:54.992 CXX test/cpp_headers/config.o 00:13:54.992 CC test/nvme/startup/startup.o 00:13:54.992 CC test/nvme/err_injection/err_injection.o 00:13:54.992 CXX test/cpp_headers/cpuset.o 00:13:54.992 LINK sgl 00:13:54.992 CXX test/cpp_headers/crc16.o 00:13:55.249 CC test/nvme/reserve/reserve.o 00:13:55.249 LINK nvme_dp 00:13:55.249 LINK startup 00:13:55.249 LINK err_injection 00:13:55.249 CXX test/cpp_headers/crc32.o 00:13:55.249 LINK overhead 00:13:55.249 CXX test/cpp_headers/crc64.o 00:13:55.249 CC 
test/nvme/simple_copy/simple_copy.o 00:13:55.508 LINK reserve 00:13:55.508 CC test/nvme/connect_stress/connect_stress.o 00:13:55.508 CC test/nvme/boot_partition/boot_partition.o 00:13:55.508 CXX test/cpp_headers/dif.o 00:13:55.508 CC test/nvme/compliance/nvme_compliance.o 00:13:55.508 CC test/nvme/fused_ordering/fused_ordering.o 00:13:55.508 CC test/nvme/doorbell_aers/doorbell_aers.o 00:13:55.508 CC test/nvme/fdp/fdp.o 00:13:55.508 LINK connect_stress 00:13:55.508 LINK simple_copy 00:13:55.766 LINK boot_partition 00:13:55.766 CXX test/cpp_headers/dma.o 00:13:55.766 LINK fused_ordering 00:13:55.766 CC test/nvme/cuse/cuse.o 00:13:55.766 LINK doorbell_aers 00:13:55.766 CXX test/cpp_headers/endian.o 00:13:55.766 CXX test/cpp_headers/env_dpdk.o 00:13:55.766 CXX test/cpp_headers/env.o 00:13:55.766 CXX test/cpp_headers/event.o 00:13:56.024 LINK nvme_compliance 00:13:56.024 CXX test/cpp_headers/fd_group.o 00:13:56.024 LINK fdp 00:13:56.024 CXX test/cpp_headers/fd.o 00:13:56.024 CXX test/cpp_headers/file.o 00:13:56.024 CXX test/cpp_headers/ftl.o 00:13:56.024 CXX test/cpp_headers/gpt_spec.o 00:13:56.024 CXX test/cpp_headers/hexlify.o 00:13:56.024 CXX test/cpp_headers/histogram_data.o 00:13:56.024 CXX test/cpp_headers/idxd.o 00:13:56.024 CXX test/cpp_headers/idxd_spec.o 00:13:56.283 CXX test/cpp_headers/init.o 00:13:56.283 CXX test/cpp_headers/ioat.o 00:13:56.283 CXX test/cpp_headers/ioat_spec.o 00:13:56.283 CXX test/cpp_headers/iscsi_spec.o 00:13:56.283 CXX test/cpp_headers/json.o 00:13:56.283 CXX test/cpp_headers/jsonrpc.o 00:13:56.283 CXX test/cpp_headers/keyring.o 00:13:56.283 CXX test/cpp_headers/keyring_module.o 00:13:56.283 CXX test/cpp_headers/likely.o 00:13:56.283 CXX test/cpp_headers/log.o 00:13:56.283 CXX test/cpp_headers/lvol.o 00:13:56.541 CXX test/cpp_headers/memory.o 00:13:56.541 CXX test/cpp_headers/mmio.o 00:13:56.541 CXX test/cpp_headers/nbd.o 00:13:56.541 CXX test/cpp_headers/notify.o 00:13:56.541 CXX test/cpp_headers/nvme.o 00:13:56.541 CXX test/cpp_headers/nvme_intel.o 00:13:56.541 CXX test/cpp_headers/nvme_ocssd_spec.o 00:13:56.541 CXX test/cpp_headers/nvme_ocssd.o 00:13:56.541 CXX test/cpp_headers/nvme_spec.o 00:13:56.541 CXX test/cpp_headers/nvme_zns.o 00:13:56.541 CXX test/cpp_headers/nvmf_cmd.o 00:13:56.799 CXX test/cpp_headers/nvmf_fc_spec.o 00:13:56.799 CXX test/cpp_headers/nvmf.o 00:13:56.799 CXX test/cpp_headers/nvmf_spec.o 00:13:56.799 CXX test/cpp_headers/nvmf_transport.o 00:13:56.799 CXX test/cpp_headers/opal.o 00:13:56.799 CXX test/cpp_headers/opal_spec.o 00:13:56.799 CXX test/cpp_headers/pci_ids.o 00:13:56.799 CXX test/cpp_headers/pipe.o 00:13:57.057 LINK cuse 00:13:57.057 CXX test/cpp_headers/queue.o 00:13:57.057 CXX test/cpp_headers/reduce.o 00:13:57.057 CXX test/cpp_headers/rpc.o 00:13:57.057 CXX test/cpp_headers/scheduler.o 00:13:57.057 CXX test/cpp_headers/scsi_spec.o 00:13:57.057 CXX test/cpp_headers/scsi.o 00:13:57.057 CXX test/cpp_headers/sock.o 00:13:57.057 CXX test/cpp_headers/stdinc.o 00:13:57.057 CXX test/cpp_headers/string.o 00:13:57.315 CXX test/cpp_headers/thread.o 00:13:57.315 CXX test/cpp_headers/trace.o 00:13:57.315 CXX test/cpp_headers/trace_parser.o 00:13:57.315 CXX test/cpp_headers/tree.o 00:13:57.315 CXX test/cpp_headers/ublk.o 00:13:57.315 CXX test/cpp_headers/util.o 00:13:57.315 CXX test/cpp_headers/uuid.o 00:13:57.315 CXX test/cpp_headers/version.o 00:13:57.315 CXX test/cpp_headers/vfio_user_pci.o 00:13:57.315 CXX test/cpp_headers/vfio_user_spec.o 00:13:57.315 CXX test/cpp_headers/vhost.o 00:13:57.315 CXX test/cpp_headers/vmd.o 00:13:57.315 
CXX test/cpp_headers/xor.o 00:13:57.315 CXX test/cpp_headers/zipf.o 00:14:00.599 LINK esnap 00:14:00.857 00:14:00.857 real 1m14.432s 00:14:00.857 user 6m46.396s 00:14:00.857 sys 1m58.143s 00:14:00.857 14:33:09 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:14:00.857 ************************************ 00:14:00.857 END TEST make 00:14:00.857 ************************************ 00:14:00.857 14:33:09 -- common/autotest_common.sh@10 -- $ set +x 00:14:00.857 14:33:09 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:14:00.857 14:33:09 -- pm/common@30 -- $ signal_monitor_resources TERM 00:14:00.857 14:33:09 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:14:00.857 14:33:09 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:14:00.857 14:33:09 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:14:00.857 14:33:09 -- pm/common@45 -- $ pid=5171 00:14:00.857 14:33:09 -- pm/common@52 -- $ sudo kill -TERM 5171 00:14:00.857 14:33:09 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:14:00.857 14:33:09 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:14:00.857 14:33:09 -- pm/common@45 -- $ pid=5172 00:14:00.857 14:33:09 -- pm/common@52 -- $ sudo kill -TERM 5172 00:14:01.116 14:33:09 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:01.116 14:33:09 -- nvmf/common.sh@7 -- # uname -s 00:14:01.116 14:33:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:01.116 14:33:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:01.116 14:33:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:01.116 14:33:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:01.116 14:33:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:01.116 14:33:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:01.116 14:33:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:01.116 14:33:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:01.116 14:33:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:01.116 14:33:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:01.116 14:33:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:441f5a21-7e81-426f-9e72-ee510eac3546 00:14:01.116 14:33:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=441f5a21-7e81-426f-9e72-ee510eac3546 00:14:01.116 14:33:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:01.116 14:33:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:01.116 14:33:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:01.116 14:33:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:01.116 14:33:09 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:01.116 14:33:09 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:01.116 14:33:09 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:01.116 14:33:09 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:01.117 14:33:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.117 14:33:09 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.117 14:33:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.117 14:33:09 -- paths/export.sh@5 -- # export PATH 00:14:01.117 14:33:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.117 14:33:09 -- nvmf/common.sh@47 -- # : 0 00:14:01.117 14:33:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:01.117 14:33:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:01.117 14:33:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:01.117 14:33:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:01.117 14:33:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:01.117 14:33:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:01.117 14:33:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:01.117 14:33:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:01.117 14:33:09 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:14:01.117 14:33:09 -- spdk/autotest.sh@32 -- # uname -s 00:14:01.117 14:33:09 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:14:01.117 14:33:09 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:14:01.117 14:33:09 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:14:01.117 14:33:09 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:14:01.117 14:33:09 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:14:01.117 14:33:09 -- spdk/autotest.sh@44 -- # modprobe nbd 00:14:01.117 14:33:09 -- spdk/autotest.sh@46 -- # type -P udevadm 00:14:01.117 14:33:09 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:14:01.117 14:33:09 -- spdk/autotest.sh@48 -- # udevadm_pid=53076 00:14:01.117 14:33:09 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:14:01.117 14:33:09 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:14:01.117 14:33:09 -- pm/common@17 -- # local monitor 00:14:01.117 14:33:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:14:01.117 14:33:09 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=53078 00:14:01.117 14:33:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:14:01.117 14:33:09 -- pm/common@21 -- # date +%s 00:14:01.117 14:33:09 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=53080 00:14:01.117 14:33:09 -- pm/common@26 -- # sleep 1 00:14:01.117 14:33:09 -- pm/common@21 -- # date +%s 00:14:01.117 14:33:09 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713364389 00:14:01.117 14:33:09 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d 
/home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713364389 00:14:01.117 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713364389_collect-vmstat.pm.log 00:14:01.117 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713364389_collect-cpu-load.pm.log 00:14:02.049 14:33:10 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:14:02.049 14:33:10 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:14:02.049 14:33:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:02.049 14:33:10 -- common/autotest_common.sh@10 -- # set +x 00:14:02.049 14:33:10 -- spdk/autotest.sh@59 -- # create_test_list 00:14:02.049 14:33:10 -- common/autotest_common.sh@734 -- # xtrace_disable 00:14:02.049 14:33:10 -- common/autotest_common.sh@10 -- # set +x 00:14:02.307 14:33:10 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:14:02.307 14:33:10 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:14:02.307 14:33:10 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:14:02.307 14:33:10 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:14:02.307 14:33:10 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:14:02.307 14:33:10 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:14:02.307 14:33:10 -- common/autotest_common.sh@1441 -- # uname 00:14:02.307 14:33:10 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:14:02.307 14:33:10 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:14:02.307 14:33:10 -- common/autotest_common.sh@1461 -- # uname 00:14:02.307 14:33:10 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:14:02.307 14:33:10 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:14:02.307 14:33:10 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:14:02.307 14:33:10 -- spdk/autotest.sh@72 -- # hash lcov 00:14:02.307 14:33:10 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:14:02.307 14:33:10 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:14:02.307 --rc lcov_branch_coverage=1 00:14:02.307 --rc lcov_function_coverage=1 00:14:02.307 --rc genhtml_branch_coverage=1 00:14:02.307 --rc genhtml_function_coverage=1 00:14:02.307 --rc genhtml_legend=1 00:14:02.307 --rc geninfo_all_blocks=1 00:14:02.307 ' 00:14:02.307 14:33:10 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:14:02.307 --rc lcov_branch_coverage=1 00:14:02.307 --rc lcov_function_coverage=1 00:14:02.307 --rc genhtml_branch_coverage=1 00:14:02.307 --rc genhtml_function_coverage=1 00:14:02.307 --rc genhtml_legend=1 00:14:02.307 --rc geninfo_all_blocks=1 00:14:02.307 ' 00:14:02.307 14:33:10 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:14:02.307 --rc lcov_branch_coverage=1 00:14:02.307 --rc lcov_function_coverage=1 00:14:02.307 --rc genhtml_branch_coverage=1 00:14:02.307 --rc genhtml_function_coverage=1 00:14:02.307 --rc genhtml_legend=1 00:14:02.307 --rc geninfo_all_blocks=1 00:14:02.307 --no-external' 00:14:02.307 14:33:10 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:14:02.307 --rc lcov_branch_coverage=1 00:14:02.307 --rc lcov_function_coverage=1 00:14:02.307 --rc genhtml_branch_coverage=1 00:14:02.307 --rc genhtml_function_coverage=1 00:14:02.307 --rc genhtml_legend=1 00:14:02.307 --rc geninfo_all_blocks=1 00:14:02.307 --no-external' 00:14:02.307 14:33:10 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:14:02.307 lcov: LCOV version 1.14 00:14:02.307 14:33:10 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:14:12.312 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:14:12.312 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:14:12.312 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:14:12.312 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:14:12.312 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:14:12.312 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:14:18.963 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:14:18.963 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:14:31.165 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:14:31.165 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:14:31.165 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:14:31.165 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:14:31.165 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:14:31.165 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:14:31.165 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:14:31.165 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:14:31.166 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:14:31.166 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:14:31.166 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:14:31.166 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:14:31.166 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:14:31.166 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:14:31.166 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:14:31.166 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:14:31.166 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:14:31.166 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:14:31.166 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:14:31.166 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:14:31.166 
00:14:31.166 to 00:14:31.684 [condensed: the same two-line warning pair, "<name>.gcno:no functions found" followed by "geninfo: WARNING: GCOV did not produce any data for <name>.gcno", repeats under /home/vagrant/spdk_repo/spdk/test/cpp_headers/ during the baseline capture started above, for each of: blob_bdev, blobfs_bdev, blobfs, blob, conf, config, cpuset, crc16, crc32, crc64, dif, dma, endian, env_dpdk, env, event, fd_group, fd, file, ftl, gpt_spec, hexlify, histogram_data, idxd, idxd_spec, init, ioat, ioat_spec, iscsi_spec, json, jsonrpc, keyring, keyring_module, likely, log, lvol, memory, mmio, nbd, notify, nvme_intel, nvme, nvme_ocssd, nvme_ocssd_spec, nvme_spec, nvme_zns, nvmf_cmd, nvmf_fc_spec, nvmf, nvmf_spec, nvmf_transport, opal, opal_spec, pipe, pci_ids, queue, reduce, rpc, scheduler, scsi, scsi_spec, sock, stdinc, string, thread and trace]
/home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:14:31.684 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:14:31.684 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:14:31.684 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:14:31.684 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:14:31.684 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:14:31.684 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:14:31.684 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:14:31.684 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:14:31.684 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:14:31.684 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:14:31.684 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:14:31.684 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:14:31.684 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:14:31.684 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:14:31.684 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:14:31.684 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:14:31.684 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:14:31.684 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:14:31.684 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:14:31.684 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:14:31.684 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:14:31.684 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:14:31.684 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:14:35.902 14:33:43 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:14:35.902 14:33:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:35.902 14:33:43 -- common/autotest_common.sh@10 -- # set +x 00:14:35.902 14:33:43 -- spdk/autotest.sh@91 -- # rm -f 00:14:35.902 14:33:43 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:36.159 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:36.417 lsblk: /dev/nvme3c3n1: not a block device 00:14:36.675 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:14:36.675 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:14:36.675 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:14:36.675 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:14:36.675 14:33:45 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:14:36.675 14:33:45 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:14:36.675 14:33:45 -- 
common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:14:36.675 14:33:45 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:14:36.675 14:33:45 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:36.675 14:33:45 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:14:36.675 14:33:45 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:14:36.675 14:33:45 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:36.675 14:33:45 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:36.675 14:33:45 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:36.675 14:33:45 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:14:36.675 14:33:45 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:14:36.675 14:33:45 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:36.675 14:33:45 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:36.675 14:33:45 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:36.675 14:33:45 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:14:36.675 14:33:45 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:14:36.675 14:33:45 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:14:36.675 14:33:45 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:36.675 14:33:45 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:36.675 14:33:45 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:14:36.675 14:33:45 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:14:36.675 14:33:45 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:14:36.675 14:33:45 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:36.675 14:33:45 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:36.675 14:33:45 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:14:36.675 14:33:45 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:14:36.675 14:33:45 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:14:36.675 14:33:45 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:36.675 14:33:45 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:36.675 14:33:45 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:14:36.675 14:33:45 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:14:36.675 14:33:45 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:14:36.675 14:33:45 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:36.675 14:33:45 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:36.675 14:33:45 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:14:36.675 14:33:45 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:14:36.675 14:33:45 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:14:36.675 14:33:45 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:36.675 14:33:45 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:14:36.675 14:33:45 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:14:36.675 14:33:45 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:14:36.675 14:33:45 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:14:36.675 14:33:45 -- scripts/common.sh@378 -- # local 
block=/dev/nvme0n1 pt 00:14:36.675 14:33:45 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:14:36.936 No valid GPT data, bailing 00:14:36.936 14:33:45 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:14:36.936 14:33:45 -- scripts/common.sh@391 -- # pt= 00:14:36.936 14:33:45 -- scripts/common.sh@392 -- # return 1 00:14:36.936 14:33:45 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:14:36.936 1+0 records in 00:14:36.936 1+0 records out 00:14:36.936 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00373803 s, 281 MB/s 00:14:36.936 14:33:45 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:14:36.936 14:33:45 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:14:36.936 14:33:45 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:14:36.936 14:33:45 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:14:36.936 14:33:45 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:14:36.936 No valid GPT data, bailing 00:14:36.936 14:33:45 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:14:36.936 14:33:45 -- scripts/common.sh@391 -- # pt= 00:14:36.936 14:33:45 -- scripts/common.sh@392 -- # return 1 00:14:36.936 14:33:45 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:14:36.936 1+0 records in 00:14:36.936 1+0 records out 00:14:36.936 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134158 s, 78.2 MB/s 00:14:36.936 14:33:45 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:14:36.936 14:33:45 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:14:36.936 14:33:45 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:14:36.936 14:33:45 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:14:36.936 14:33:45 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:14:36.936 No valid GPT data, bailing 00:14:36.936 14:33:45 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:14:36.936 14:33:45 -- scripts/common.sh@391 -- # pt= 00:14:36.936 14:33:45 -- scripts/common.sh@392 -- # return 1 00:14:36.936 14:33:45 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:14:36.936 1+0 records in 00:14:36.936 1+0 records out 00:14:36.936 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00456217 s, 230 MB/s 00:14:36.936 14:33:45 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:14:36.936 14:33:45 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:14:36.936 14:33:45 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n2 00:14:36.936 14:33:45 -- scripts/common.sh@378 -- # local block=/dev/nvme2n2 pt 00:14:36.936 14:33:45 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:14:37.198 No valid GPT data, bailing 00:14:37.198 14:33:45 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:14:37.198 14:33:45 -- scripts/common.sh@391 -- # pt= 00:14:37.198 14:33:45 -- scripts/common.sh@392 -- # return 1 00:14:37.198 14:33:45 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:14:37.198 1+0 records in 00:14:37.198 1+0 records out 00:14:37.198 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00487797 s, 215 MB/s 00:14:37.198 14:33:45 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:14:37.198 14:33:45 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:14:37.198 14:33:45 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n3 00:14:37.198 14:33:45 -- scripts/common.sh@378 
-- # local block=/dev/nvme2n3 pt 00:14:37.198 14:33:45 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:14:37.198 No valid GPT data, bailing 00:14:37.198 14:33:45 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:14:37.198 14:33:45 -- scripts/common.sh@391 -- # pt= 00:14:37.198 14:33:45 -- scripts/common.sh@392 -- # return 1 00:14:37.198 14:33:45 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:14:37.198 1+0 records in 00:14:37.198 1+0 records out 00:14:37.199 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00369082 s, 284 MB/s 00:14:37.199 14:33:45 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:14:37.199 14:33:45 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:14:37.199 14:33:45 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:14:37.199 14:33:45 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:14:37.199 14:33:45 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:14:37.199 No valid GPT data, bailing 00:14:37.199 14:33:45 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:14:37.199 14:33:45 -- scripts/common.sh@391 -- # pt= 00:14:37.199 14:33:45 -- scripts/common.sh@392 -- # return 1 00:14:37.199 14:33:45 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:14:37.199 1+0 records in 00:14:37.199 1+0 records out 00:14:37.199 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0036147 s, 290 MB/s 00:14:37.199 14:33:45 -- spdk/autotest.sh@118 -- # sync 00:14:37.199 14:33:45 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:14:37.199 14:33:45 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:14:37.199 14:33:45 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:14:39.154 14:33:47 -- spdk/autotest.sh@124 -- # uname -s 00:14:39.154 14:33:47 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:14:39.154 14:33:47 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:14:39.154 14:33:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:39.154 14:33:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:39.154 14:33:47 -- common/autotest_common.sh@10 -- # set +x 00:14:39.154 ************************************ 00:14:39.154 START TEST setup.sh 00:14:39.154 ************************************ 00:14:39.154 14:33:47 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:14:39.154 * Looking for test storage... 00:14:39.154 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:14:39.154 14:33:47 -- setup/test-setup.sh@10 -- # uname -s 00:14:39.154 14:33:47 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:14:39.154 14:33:47 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:14:39.154 14:33:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:39.154 14:33:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:39.154 14:33:47 -- common/autotest_common.sh@10 -- # set +x 00:14:39.154 ************************************ 00:14:39.154 START TEST acl 00:14:39.154 ************************************ 00:14:39.154 14:33:47 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:14:39.412 * Looking for test storage... 
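The loop traced just above claims each NVMe namespace for testing: block_in_use checks for a partition table (first via scripts/spdk-gpt.py, then blkid), and devices without one get their first MiB zeroed. A condensed restatement with the spdk-gpt.py step omitted and error handling simplified:

```bash
#!/usr/bin/env bash
# Condensed restatement of the wipe loop traced above; the real script also
# consults scripts/spdk-gpt.py before blkid, which is omitted here.
set -uo pipefail
shopt -s extglob                        # enables the nvme*n!(*p*) pattern

for dev in /dev/nvme*n!(*p*); do
    [[ -b $dev ]] || continue           # the glob may match nothing
    pt=$(blkid -s PTTYPE -o value "$dev") || true
    if [[ -z ${pt:-} ]]; then
        # No partition table found: zero the first MiB so tests start clean.
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi
done
```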
00:14:39.412 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:14:39.412 14:33:47 -- setup/acl.sh@10 -- # get_zoned_devs 00:14:39.412 14:33:47 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:14:39.412 14:33:47 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:14:39.412 14:33:47 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:14:39.412 14:33:47 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:39.412 14:33:47 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:14:39.412 14:33:47 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:14:39.412 14:33:47 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:39.412 14:33:47 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:39.412 14:33:47 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:39.412 14:33:47 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:14:39.412 14:33:47 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:14:39.412 14:33:47 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:39.412 14:33:47 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:39.412 14:33:47 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:39.412 14:33:47 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:14:39.412 14:33:47 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:14:39.412 14:33:47 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:14:39.412 14:33:47 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:39.412 14:33:47 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:39.412 14:33:47 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:14:39.412 14:33:47 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:14:39.412 14:33:47 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:14:39.412 14:33:47 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:39.412 14:33:47 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:39.412 14:33:47 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:14:39.412 14:33:47 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:14:39.412 14:33:47 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:14:39.412 14:33:47 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:39.412 14:33:47 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:39.412 14:33:47 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:14:39.412 14:33:47 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:14:39.412 14:33:47 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:14:39.412 14:33:47 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:39.412 14:33:47 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:39.412 14:33:47 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:14:39.412 14:33:47 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:14:39.412 14:33:47 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:14:39.412 14:33:47 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:39.412 14:33:47 -- setup/acl.sh@12 -- # devs=() 00:14:39.412 14:33:47 -- setup/acl.sh@12 -- # declare -a devs 
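get_zoned_devs, traced here for the second time (earlier from autotest.sh, now from acl.sh), reduces to a sysfs check: a block device counts as zoned when its queue/zoned attribute reports anything other than none. A minimal restatement, with the associative-array bookkeeping of the real helper trimmed:

```bash
#!/usr/bin/env bash
# Minimal restatement of the is_block_zoned / get_zoned_devs pair traced
# twice in this log. Only the sysfs probe is kept.
is_block_zoned() {
    local device=$1
    [[ -e /sys/block/$device/queue/zoned ]] || return 1
    [[ $(< "/sys/block/$device/queue/zoned") != none ]]
}

for nvme in /sys/block/nvme*; do
    dev=${nvme##*/}
    is_block_zoned "$dev" && echo "zoned device: $dev"
done
```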
00:14:39.412 14:33:47 -- setup/acl.sh@13 -- # drivers=() 00:14:39.412 14:33:47 -- setup/acl.sh@13 -- # declare -A drivers 00:14:39.412 14:33:47 -- setup/acl.sh@51 -- # setup reset 00:14:39.412 14:33:47 -- setup/common.sh@9 -- # [[ reset == output ]] 00:14:39.412 14:33:47 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:40.788 14:33:49 -- setup/acl.sh@52 -- # collect_setup_devs 00:14:40.788 14:33:49 -- setup/acl.sh@16 -- # local dev driver 00:14:40.788 14:33:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:40.788 14:33:49 -- setup/acl.sh@15 -- # setup output status 00:14:40.788 14:33:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:40.788 14:33:49 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:14:41.354 14:33:49 -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:14:41.354 14:33:49 -- setup/acl.sh@19 -- # continue 00:14:41.354 14:33:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:41.612 lsblk: /dev/nvme3c3n1: not a block device 00:14:41.916 Hugepages 00:14:41.916 node hugesize free / total 00:14:41.916 14:33:50 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:14:41.916 14:33:50 -- setup/acl.sh@19 -- # continue 00:14:41.916 14:33:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:41.916 00:14:41.916 Type BDF Vendor Device NUMA Driver Device Block devices 00:14:41.916 14:33:50 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:14:41.916 14:33:50 -- setup/acl.sh@19 -- # continue 00:14:41.916 14:33:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:41.916 14:33:50 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:14:41.916 14:33:50 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:14:41.916 14:33:50 -- setup/acl.sh@20 -- # continue 00:14:41.916 14:33:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:41.916 14:33:50 -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:14:41.916 14:33:50 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:14:41.916 14:33:50 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:14:41.916 14:33:50 -- setup/acl.sh@22 -- # devs+=("$dev") 00:14:41.916 14:33:50 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:14:41.916 14:33:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:42.175 14:33:50 -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:14:42.175 14:33:50 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:14:42.175 14:33:50 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:14:42.175 14:33:50 -- setup/acl.sh@22 -- # devs+=("$dev") 00:14:42.175 14:33:50 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:14:42.175 14:33:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:42.175 14:33:50 -- setup/acl.sh@19 -- # [[ 0000:00:12.0 == *:*:*.* ]] 00:14:42.175 14:33:50 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:14:42.175 14:33:50 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:14:42.175 14:33:50 -- setup/acl.sh@22 -- # devs+=("$dev") 00:14:42.175 14:33:50 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:14:42.175 14:33:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:42.175 14:33:50 -- setup/acl.sh@19 -- # [[ 0000:00:13.0 == *:*:*.* ]] 00:14:42.175 14:33:50 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:14:42.175 14:33:50 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:14:42.175 14:33:50 -- setup/acl.sh@22 -- # devs+=("$dev") 00:14:42.175 14:33:50 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:14:42.175 14:33:50 -- setup/acl.sh@18 
-- # read -r _ dev _ _ _ driver _ 00:14:42.175 14:33:50 -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:14:42.175 14:33:50 -- setup/acl.sh@54 -- # run_test denied denied 00:14:42.175 14:33:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:42.175 14:33:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:42.175 14:33:50 -- common/autotest_common.sh@10 -- # set +x 00:14:42.434 ************************************ 00:14:42.434 START TEST denied 00:14:42.434 ************************************ 00:14:42.434 14:33:50 -- common/autotest_common.sh@1111 -- # denied 00:14:42.434 14:33:50 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:14:42.434 14:33:50 -- setup/acl.sh@38 -- # setup output config 00:14:42.434 14:33:50 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:14:42.434 14:33:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:42.434 14:33:50 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:43.368 lsblk: /dev/nvme3c3n1: not a block device 00:14:43.935 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:14:43.935 14:33:52 -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:14:43.935 14:33:52 -- setup/acl.sh@28 -- # local dev driver 00:14:43.935 14:33:52 -- setup/acl.sh@30 -- # for dev in "$@" 00:14:43.935 14:33:52 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:14:43.935 14:33:52 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:14:43.935 14:33:52 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:14:43.935 14:33:52 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:14:43.935 14:33:52 -- setup/acl.sh@41 -- # setup reset 00:14:43.935 14:33:52 -- setup/common.sh@9 -- # [[ reset == output ]] 00:14:43.935 14:33:52 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:50.493 ************************************ 00:14:50.493 END TEST denied 00:14:50.493 ************************************ 00:14:50.493 00:14:50.493 real 0m7.691s 00:14:50.493 user 0m0.944s 00:14:50.493 sys 0m1.813s 00:14:50.493 14:33:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:50.493 14:33:58 -- common/autotest_common.sh@10 -- # set +x 00:14:50.493 14:33:58 -- setup/acl.sh@55 -- # run_test allowed allowed 00:14:50.493 14:33:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:50.493 14:33:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:50.493 14:33:58 -- common/autotest_common.sh@10 -- # set +x 00:14:50.493 ************************************ 00:14:50.493 START TEST allowed 00:14:50.493 ************************************ 00:14:50.493 14:33:58 -- common/autotest_common.sh@1111 -- # allowed 00:14:50.493 14:33:58 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:14:50.493 14:33:58 -- setup/acl.sh@45 -- # setup output config 00:14:50.493 14:33:58 -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:14:50.493 14:33:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:50.493 14:33:58 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:51.060 lsblk: /dev/nvme3c3n1: not a block device 00:14:51.319 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:51.319 14:33:59 -- setup/acl.sh@47 -- # verify 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:51.319 14:33:59 -- setup/acl.sh@28 -- # local dev driver 00:14:51.319 14:33:59 -- setup/acl.sh@30 -- # for dev in "$@" 00:14:51.319 14:33:59 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 
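Both ACL tests exercise the same mechanism: scripts/setup.sh honours the PCI_BLOCKED / PCI_ALLOWED lists, and verify then resolves each BDF's sysfs driver symlink to confirm what the device is left bound to. A sketch matching the readlink calls in the trace, not the verbatim setup/acl.sh source:

  verify() {
      local dev driver
      for dev in "$@"; do
          [[ -e /sys/bus/pci/devices/$dev ]] || return 1
          driver=$(readlink -f "/sys/bus/pci/devices/$dev/driver")
          # the test passes only while the BDF is still bound to the kernel
          # nvme driver
          [[ ${driver##*/} == nvme ]] || return 1
      done
  }
  # after PCI_BLOCKED=' 0000:00:10.0' setup.sh config, this succeeds:
  verify 0000:00:10.0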
00:14:51.319 14:33:59 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:14:51.319 14:33:59 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:14:51.319 14:33:59 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:14:51.319 14:33:59 -- setup/acl.sh@30 -- # for dev in "$@" 00:14:51.319 14:33:59 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:12.0 ]] 00:14:51.319 14:33:59 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:12.0/driver 00:14:51.319 14:33:59 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:14:51.319 14:33:59 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:14:51.319 14:33:59 -- setup/acl.sh@30 -- # for dev in "$@" 00:14:51.319 14:33:59 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:13.0 ]] 00:14:51.319 14:33:59 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:13.0/driver 00:14:51.319 14:33:59 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:14:51.319 14:33:59 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:14:51.319 14:33:59 -- setup/acl.sh@48 -- # setup reset 00:14:51.319 14:33:59 -- setup/common.sh@9 -- # [[ reset == output ]] 00:14:51.319 14:33:59 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:52.693 ************************************ 00:14:52.693 END TEST allowed 00:14:52.693 ************************************ 00:14:52.693 00:14:52.693 real 0m2.324s 00:14:52.693 user 0m0.981s 00:14:52.693 sys 0m1.356s 00:14:52.693 14:34:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:52.693 14:34:00 -- common/autotest_common.sh@10 -- # set +x 00:14:52.693 ************************************ 00:14:52.693 END TEST acl 00:14:52.693 ************************************ 00:14:52.693 00:14:52.693 real 0m13.278s 00:14:52.693 user 0m3.345s 00:14:52.693 sys 0m5.034s 00:14:52.693 14:34:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:52.693 14:34:00 -- common/autotest_common.sh@10 -- # set +x 00:14:52.693 14:34:01 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:14:52.693 14:34:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:52.693 14:34:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:52.693 14:34:01 -- common/autotest_common.sh@10 -- # set +x 00:14:52.693 ************************************ 00:14:52.693 START TEST hugepages 00:14:52.693 ************************************ 00:14:52.693 14:34:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:14:52.693 * Looking for test storage... 
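The hugepages suite starts by asking get_meminfo for the kernel's default huge page size. With no node argument the helper reads /proc/meminfo, splits each line on ': ', and returns the value of the first key that matches, which is what produces the long field-by-field scan traced below. An equivalent stand-alone sketch:

  get_meminfo() {
      local get=$1 node=${2:-} var val _
      local mem_f=/proc/meminfo
      # with a node argument the per-node file would be read instead:
      #   /sys/devices/system/node/node$node/meminfo
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < "$mem_f"
      return 1
  }
  get_meminfo Hugepagesize   # prints 2048 (kB) on this VM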
00:14:52.693 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:14:52.693 14:34:01 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:14:52.693 14:34:01 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:14:52.693 14:34:01 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:14:52.693 14:34:01 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:14:52.693 14:34:01 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:14:52.693 14:34:01 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:14:52.693 14:34:01 -- setup/common.sh@17 -- # local get=Hugepagesize 00:14:52.693 14:34:01 -- setup/common.sh@18 -- # local node= 00:14:52.693 14:34:01 -- setup/common.sh@19 -- # local var val 00:14:52.693 14:34:01 -- setup/common.sh@20 -- # local mem_f mem 00:14:52.693 14:34:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:52.693 14:34:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:52.693 14:34:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:52.693 14:34:01 -- setup/common.sh@28 -- # mapfile -t mem 00:14:52.693 14:34:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:52.693 14:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:14:52.693 14:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:14:52.693 14:34:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232248 kB' 'MemFree: 5852416 kB' 'MemAvailable: 7432364 kB' 'Buffers: 2436 kB' 'Cached: 1786956 kB' 'SwapCached: 0 kB' 'Active: 440440 kB' 'Inactive: 1454460 kB' 'Active(anon): 105328 kB' 'Inactive(anon): 10696 kB' 'Active(file): 335112 kB' 'Inactive(file): 1443764 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 784 kB' 'Writeback: 0 kB' 'AnonPages: 105568 kB' 'Mapped: 51848 kB' 'Shmem: 10512 kB' 'KReclaimable: 75724 kB' 'Slab: 148464 kB' 'SReclaimable: 75724 kB' 'SUnreclaim: 72740 kB' 'KernelStack: 4680 kB' 'PageTables: 3672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12407576 kB' 'Committed_AS: 321624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53232 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 7163904 kB' 'DirectMap1G: 7340032 kB' 00:14:52.693 14:34:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:52.693 14:34:01 -- setup/common.sh@32 -- # continue 00:14:52.693 14:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:14:52.693 14:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:14:52.693 14:34:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:52.693 14:34:01 -- setup/common.sh@32 -- # continue 00:14:52.693 14:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:14:52.693 14:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:14:52.693 14:34:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:52.693 14:34:01 -- setup/common.sh@32 -- # continue 00:14:52.693 14:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:14:52.693 14:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:14:52.693 14:34:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:52.693 14:34:01 -- 
setup/common.sh@32 -- # continue 00:14:52.693 14:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:14:52.693 14:34:01 -- setup/common.sh@31 -- # read -r var val _ [... the identical [[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue / IFS=': ' / read cycle repeats for every remaining /proc/meminfo field, Cached through ShmemHugePages, none matching ...] 00:14:52.694 14:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:14:52.694 14:34:01 -- 
setup/common.sh@31 -- # read -r var val _ 00:14:52.694 14:34:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:52.694 14:34:01 -- setup/common.sh@32 -- # continue 00:14:52.694 14:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:14:52.694 14:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:14:52.694 14:34:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:52.694 14:34:01 -- setup/common.sh@32 -- # continue 00:14:52.694 14:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:14:52.694 14:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:14:52.694 14:34:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:52.694 14:34:01 -- setup/common.sh@32 -- # continue 00:14:52.694 14:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:14:52.694 14:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:14:52.694 14:34:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:52.694 14:34:01 -- setup/common.sh@32 -- # continue 00:14:52.694 14:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:14:52.694 14:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:14:52.694 14:34:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:52.694 14:34:01 -- setup/common.sh@32 -- # continue 00:14:52.694 14:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:14:52.694 14:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:14:52.694 14:34:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:52.694 14:34:01 -- setup/common.sh@32 -- # continue 00:14:52.694 14:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:14:52.694 14:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:14:52.694 14:34:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:52.694 14:34:01 -- setup/common.sh@32 -- # continue 00:14:52.694 14:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:14:52.694 14:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:14:52.694 14:34:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:52.694 14:34:01 -- setup/common.sh@32 -- # continue 00:14:52.695 14:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:14:52.695 14:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:14:52.695 14:34:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:52.695 14:34:01 -- setup/common.sh@32 -- # continue 00:14:52.695 14:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:14:52.695 14:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:14:52.695 14:34:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:52.695 14:34:01 -- setup/common.sh@32 -- # continue 00:14:52.695 14:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:14:52.695 14:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:14:52.695 14:34:01 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:52.695 14:34:01 -- setup/common.sh@33 -- # echo 2048 00:14:52.695 14:34:01 -- setup/common.sh@33 -- # return 0 00:14:52.695 14:34:01 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:14:52.695 14:34:01 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:14:52.695 14:34:01 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:14:52.695 14:34:01 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:14:52.695 14:34:01 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:14:52.695 14:34:01 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
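With Hugepagesize known to be 2048 kB, hugepages.sh pins its sysfs paths and clears every pre-existing pool so each test's counts start from zero; default_setup then requests 2097152 kB / 2048 kB = 1024 pages on node 0, exactly the nr_hugepages=1024 seen below. A sketch of clear_hp consistent with the echo 0 writes in the trace:

  clear_hp() {
      local node hp
      for node in /sys/devices/system/node/node[0-9]*; do
          # one hugepages-<size>kB directory per supported page size
          for hp in "$node"/hugepages/hugepages-*; do
              echo 0 > "$hp/nr_hugepages"
          done
      done
      export CLEAR_HUGE=yes
  }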
00:14:52.695 14:34:01 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:14:52.695 14:34:01 -- setup/hugepages.sh@207 -- # get_nodes 00:14:52.695 14:34:01 -- setup/hugepages.sh@27 -- # local node 00:14:52.695 14:34:01 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:52.695 14:34:01 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:14:52.695 14:34:01 -- setup/hugepages.sh@32 -- # no_nodes=1 00:14:52.695 14:34:01 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:14:52.695 14:34:01 -- setup/hugepages.sh@208 -- # clear_hp 00:14:52.695 14:34:01 -- setup/hugepages.sh@37 -- # local node hp 00:14:52.695 14:34:01 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:14:52.695 14:34:01 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:14:52.695 14:34:01 -- setup/hugepages.sh@41 -- # echo 0 00:14:52.695 14:34:01 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:14:52.695 14:34:01 -- setup/hugepages.sh@41 -- # echo 0 00:14:52.695 14:34:01 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:14:52.695 14:34:01 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:14:52.695 14:34:01 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:14:52.695 14:34:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:52.695 14:34:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:52.695 14:34:01 -- common/autotest_common.sh@10 -- # set +x 00:14:52.953 ************************************ 00:14:52.953 START TEST default_setup 00:14:52.953 ************************************ 00:14:52.953 14:34:01 -- common/autotest_common.sh@1111 -- # default_setup 00:14:52.953 14:34:01 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:14:52.953 14:34:01 -- setup/hugepages.sh@49 -- # local size=2097152 00:14:52.953 14:34:01 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:14:52.953 14:34:01 -- setup/hugepages.sh@51 -- # shift 00:14:52.953 14:34:01 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:14:52.953 14:34:01 -- setup/hugepages.sh@52 -- # local node_ids 00:14:52.953 14:34:01 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:14:52.953 14:34:01 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:14:52.953 14:34:01 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:14:52.953 14:34:01 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:14:52.953 14:34:01 -- setup/hugepages.sh@62 -- # local user_nodes 00:14:52.953 14:34:01 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:14:52.953 14:34:01 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:14:52.953 14:34:01 -- setup/hugepages.sh@67 -- # nodes_test=() 00:14:52.953 14:34:01 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:14:52.953 14:34:01 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:14:52.953 14:34:01 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:14:52.953 14:34:01 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:14:52.953 14:34:01 -- setup/hugepages.sh@73 -- # return 0 00:14:52.953 14:34:01 -- setup/hugepages.sh@137 -- # setup output 00:14:52.953 14:34:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:52.953 14:34:01 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:53.525 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:53.802 lsblk: /dev/nvme3c3n1: not a block device 00:14:54.061 0000:00:10.0 
(1b36 0010): nvme -> uio_pci_generic 00:14:54.061 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:54.061 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:54.061 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:54.323 14:34:02 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:14:54.323 14:34:02 -- setup/hugepages.sh@89 -- # local node 00:14:54.323 14:34:02 -- setup/hugepages.sh@90 -- # local sorted_t 00:14:54.323 14:34:02 -- setup/hugepages.sh@91 -- # local sorted_s 00:14:54.323 14:34:02 -- setup/hugepages.sh@92 -- # local surp 00:14:54.323 14:34:02 -- setup/hugepages.sh@93 -- # local resv 00:14:54.323 14:34:02 -- setup/hugepages.sh@94 -- # local anon 00:14:54.323 14:34:02 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:14:54.323 14:34:02 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:14:54.323 14:34:02 -- setup/common.sh@17 -- # local get=AnonHugePages 00:14:54.323 14:34:02 -- setup/common.sh@18 -- # local node= 00:14:54.323 14:34:02 -- setup/common.sh@19 -- # local var val 00:14:54.323 14:34:02 -- setup/common.sh@20 -- # local mem_f mem 00:14:54.323 14:34:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:54.323 14:34:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:54.323 14:34:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:54.323 14:34:02 -- setup/common.sh@28 -- # mapfile -t mem 00:14:54.323 14:34:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:54.323 14:34:02 -- setup/common.sh@31 -- # IFS=': ' 00:14:54.323 14:34:02 -- setup/common.sh@31 -- # read -r var val _ 00:14:54.323 14:34:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232248 kB' 'MemFree: 7960544 kB' 'MemAvailable: 9540212 kB' 'Buffers: 2436 kB' 'Cached: 1786940 kB' 'SwapCached: 0 kB' 'Active: 457528 kB' 'Inactive: 1454452 kB' 'Active(anon): 122416 kB' 'Inactive(anon): 10672 kB' 'Active(file): 335112 kB' 'Inactive(file): 1443780 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 936 kB' 'Writeback: 0 kB' 'AnonPages: 122704 kB' 'Mapped: 52016 kB' 'Shmem: 10472 kB' 'KReclaimable: 75136 kB' 'Slab: 147616 kB' 'SReclaimable: 75136 kB' 'SUnreclaim: 72480 kB' 'KernelStack: 4656 kB' 'PageTables: 3384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456152 kB' 'Committed_AS: 340756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53280 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 7163904 kB' 'DirectMap1G: 7340032 kB' 00:14:54.323 14:34:02 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:54.323 14:34:02 -- setup/common.sh@32 -- # continue 00:14:54.323 14:34:02 -- setup/common.sh@31 -- # IFS=': ' 00:14:54.323 14:34:02 -- setup/common.sh@31 -- # read -r var val _ 00:14:54.323 14:34:02 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:54.323 14:34:02 -- setup/common.sh@32 -- # continue 00:14:54.323 14:34:02 -- setup/common.sh@31 -- # IFS=': ' 00:14:54.323 14:34:02 -- setup/common.sh@31 -- # read -r var val _ 00:14:54.323 14:34:02 -- setup/common.sh@32 -- # [[ 
MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:54.323 14:34:02 -- setup/common.sh@32 -- # continue 00:14:54.323 14:34:02 -- setup/common.sh@31 -- # IFS=': ' 00:14:54.323 14:34:02 -- setup/common.sh@31 -- # read -r var val _ [... the identical [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue / IFS=': ' / read cycle repeats for every field from Buffers through HardwareCorrupted, none matching ...] 00:14:54.323 14:34:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:54.323 14:34:02 -- 
setup/common.sh@33 -- # echo 0 00:14:54.323 14:34:02 -- setup/common.sh@33 -- # return 0 00:14:54.323 14:34:02 -- setup/hugepages.sh@97 -- # anon=0 00:14:54.323 14:34:02 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:14:54.323 14:34:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:54.323 14:34:02 -- setup/common.sh@18 -- # local node= 00:14:54.323 14:34:02 -- setup/common.sh@19 -- # local var val 00:14:54.323 14:34:02 -- setup/common.sh@20 -- # local mem_f mem 00:14:54.323 14:34:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:54.323 14:34:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:54.323 14:34:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:54.323 14:34:02 -- setup/common.sh@28 -- # mapfile -t mem 00:14:54.323 14:34:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:54.323 14:34:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232248 kB' 'MemFree: 7960544 kB' 'MemAvailable: 9540212 kB' 'Buffers: 2436 kB' 'Cached: 1786940 kB' 'SwapCached: 0 kB' 'Active: 457184 kB' 'Inactive: 1454452 kB' 'Active(anon): 122072 kB' 'Inactive(anon): 10672 kB' 'Active(file): 335112 kB' 'Inactive(file): 1443780 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 936 kB' 'Writeback: 0 kB' 'AnonPages: 122612 kB' 'Mapped: 52016 kB' 'Shmem: 10472 kB' 'KReclaimable: 75136 kB' 'Slab: 147612 kB' 'SReclaimable: 75136 kB' 'SUnreclaim: 72476 kB' 'KernelStack: 4608 kB' 'PageTables: 3268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456152 kB' 'Committed_AS: 340756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53248 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 7163904 kB' 'DirectMap1G: 7340032 kB' 00:14:54.323 14:34:02 -- setup/common.sh@31 -- # IFS=': ' 00:14:54.323 14:34:02 -- setup/common.sh@31 -- # read -r var val _ 00:14:54.323 14:34:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:54.323 14:34:02 -- setup/common.sh@32 -- # continue 00:14:54.323 14:34:02 -- setup/common.sh@31 -- # IFS=': ' 00:14:54.323 14:34:02 -- setup/common.sh@31 -- # read -r var val _ 00:14:54.323 14:34:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:54.323 14:34:02 -- setup/common.sh@32 -- # continue 00:14:54.323 14:34:02 -- setup/common.sh@31 -- # IFS=': ' 00:14:54.323 14:34:02 -- setup/common.sh@31 -- # read -r var val _ 00:14:54.323 14:34:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:54.323 14:34:02 -- setup/common.sh@32 -- # continue 00:14:54.323 14:34:02 -- setup/common.sh@31 -- # IFS=': ' 00:14:54.323 14:34:02 -- setup/common.sh@31 -- # read -r var val _ 00:14:54.323 14:34:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:54.323 14:34:02 -- setup/common.sh@32 -- # continue 00:14:54.323 14:34:02 -- setup/common.sh@31 -- # IFS=': ' 00:14:54.323 14:34:02 -- setup/common.sh@31 -- # read -r var val _ 00:14:54.324 14:34:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:54.324 14:34:02 -- 
setup/common.sh@32 -- # continue 00:14:54.324 14:34:02 -- setup/common.sh@31 -- # IFS=': ' 00:14:54.324 14:34:02 -- setup/common.sh@31 -- # read -r var val _ [... the identical [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue / IFS=': ' / read cycle repeats for every field from SwapCached through ShmemHugePages, none matching ...] 00:14:54.324 14:34:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:54.324 14:34:02 -- setup/common.sh@32 -- # continue 00:14:54.324 14:34:02 -- setup/common.sh@31 -- # IFS=': ' 00:14:54.324 14:34:02 -- setup/common.sh@31 -- # read -r var val _ 00:14:54.324 14:34:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:54.324 14:34:02 -- setup/common.sh@32 -- # continue 00:14:54.324 14:34:02 -- setup/common.sh@31 -- # IFS=': ' 00:14:54.324 14:34:02 -- setup/common.sh@31 -- # read -r var val _ 00:14:54.324 14:34:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:54.324 14:34:02 -- setup/common.sh@32 -- # continue 00:14:54.324 14:34:02 -- setup/common.sh@31 -- # IFS=': ' 00:14:54.324 14:34:02 -- setup/common.sh@31 -- # read -r var val _ 00:14:54.324 14:34:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:54.324 14:34:02 -- setup/common.sh@32 -- # continue 00:14:54.324 14:34:02 -- setup/common.sh@31 -- # IFS=': ' 00:14:54.324 14:34:02 -- setup/common.sh@31 -- # read -r var val _ 00:14:54.324 14:34:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:54.324 14:34:02 -- setup/common.sh@32 -- # continue 00:14:54.324 14:34:02 -- setup/common.sh@31 -- # IFS=': ' 00:14:54.324 14:34:02 -- setup/common.sh@31 -- # read -r var val _ 00:14:54.324 14:34:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:54.324 14:34:02 -- setup/common.sh@32 -- # continue 00:14:54.324 14:34:02 -- setup/common.sh@31 -- # IFS=': ' 00:14:54.324 14:34:02 -- setup/common.sh@31 -- # read -r var val _ 00:14:54.324 14:34:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:54.324 14:34:02 -- setup/common.sh@32 -- # continue 00:14:54.324 14:34:02 -- setup/common.sh@31 -- # IFS=': ' 00:14:54.324 14:34:02 -- setup/common.sh@31 -- # read -r var val _ 00:14:54.324 14:34:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:54.324 14:34:02 -- setup/common.sh@32 -- # continue 00:14:54.324 14:34:02 -- setup/common.sh@31 -- # IFS=': ' 00:14:54.324 14:34:02 -- setup/common.sh@31 -- # read -r var val _ 00:14:54.324 14:34:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:54.324 14:34:02 -- setup/common.sh@32 -- # continue 00:14:54.324 14:34:02 -- setup/common.sh@31 -- # IFS=': ' 00:14:54.324 14:34:02 -- setup/common.sh@31 -- # read -r var val _ 00:14:54.324 14:34:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:54.324 14:34:02 -- setup/common.sh@33 -- # echo 0 00:14:54.324 14:34:02 -- setup/common.sh@33 -- # return 0 00:14:54.324 14:34:02 -- setup/hugepages.sh@99 -- # surp=0 00:14:54.324 14:34:02 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:14:54.324 14:34:02 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:14:54.324 14:34:02 -- setup/common.sh@18 -- # local node= 00:14:54.324 14:34:02 -- setup/common.sh@19 -- # local var val 00:14:54.324 14:34:02 -- setup/common.sh@20 -- # local mem_f mem 00:14:54.324 14:34:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:54.324 14:34:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:54.324 14:34:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:54.324 14:34:02 -- setup/common.sh@28 -- # mapfile -t mem 00:14:54.324 14:34:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:54.324 14:34:02 -- setup/common.sh@31 -- # IFS=': ' 00:14:54.324 14:34:02 -- 
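The get_meminfo helper being traced above has a simple shape: read the whole meminfo file, strip any per-node prefix, then scan field by field until the requested key matches, which is exactly the compare-and-continue loop the xtrace keeps printing. A minimal standalone sketch of that idea, reconstructed from the trace rather than from the actual setup/common.sh source (the function name get_mem is mine):

    # get_mem FIELD [NODE] -- print one value from /proc/meminfo,
    # or from a NUMA node's meminfo when a node id is given
    get_mem() {
        local get=$1 node=${2-} mem_f=/proc/meminfo line var val _
        [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS= read -r line; do
            line=${line#Node "$node" }   # node files prefix every line with "Node N "
            IFS=': ' read -r var val _ <<< "$line"
            # same compare-and-continue the xtrace shows at common.sh@31-32;
            # for HugePages_* fields val is a bare count, for the rest a kB number
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        return 1
    }

    get_mem HugePages_Surp   # prints 0 on this runner, matching surp=0 above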
00:14:54.324 14:34:02 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:14:54.324 14:34:02 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:14:54.324 14:34:02 -- setup/common.sh@18 -- # local node=
00:14:54.324 14:34:02 -- setup/common.sh@19 -- # local var val
00:14:54.324 14:34:02 -- setup/common.sh@20 -- # local mem_f mem
00:14:54.324 14:34:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:54.324 14:34:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:14:54.324 14:34:02 -- setup/common.sh@25 -- # [[ -n '' ]]
00:14:54.324 14:34:02 -- setup/common.sh@28 -- # mapfile -t mem
00:14:54.324 14:34:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:54.324 14:34:02 -- setup/common.sh@31 -- # IFS=': '
00:14:54.324 14:34:02 -- setup/common.sh@31 -- # read -r var val _
00:14:54.324 14:34:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232248 kB' 'MemFree: 7965508 kB' 'MemAvailable: 9545188 kB' 'Buffers: 2436 kB' 'Cached: 1786940 kB' 'SwapCached: 0 kB' 'Active: 457448 kB' 'Inactive: 1454464 kB' 'Active(anon): 122336 kB' 'Inactive(anon): 10672 kB' 'Active(file): 335112 kB' 'Inactive(file): 1443792 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 936 kB' 'Writeback: 0 kB' 'AnonPages: 122620 kB' 'Mapped: 52016 kB' 'Shmem: 10472 kB' 'KReclaimable: 75136 kB' 'Slab: 147628 kB' 'SReclaimable: 75136 kB' 'SUnreclaim: 72492 kB' 'KernelStack: 4608 kB' 'PageTables: 3264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456152 kB' 'Committed_AS: 340756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53232 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 7163904 kB' 'DirectMap1G: 7340032 kB'
00:14:54.324 [... the same setup/common.sh@31/@32 read/compare/continue xtrace repeats for every field from MemTotal through HugePages_Free ...]
00:14:54.325 14:34:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:14:54.325 14:34:02 -- setup/common.sh@33 -- # echo 0
00:14:54.325 14:34:02 -- setup/common.sh@33 -- # return 0
00:14:54.325 nr_hugepages=1024
00:14:54.325 resv_hugepages=0
00:14:54.325 surplus_hugepages=0
00:14:54.325 anon_hugepages=0
00:14:54.325 14:34:02 -- setup/hugepages.sh@100 -- # resv=0
00:14:54.325 14:34:02 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:14:54.325 14:34:02 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:14:54.325 14:34:02 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:14:54.325 14:34:02 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:14:54.325 14:34:02 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:14:54.325 14:34:02 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
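At this point the test has collected three numbers from the same snapshot: surp=0, resv=0 and the requested nr_hugepages=1024, and the @107 check above is a simple conservation identity over those counters. Restated as a small sketch, using the get_mem helper sketched earlier (the variable names are mine, not the script's):

    nr_hugepages=1024                  # what default_setup asked the kernel for
    surp=$(get_mem HugePages_Surp)     # 0 in this run
    resv=$(get_mem HugePages_Rsvd)     # 0 in this run
    total=$(get_mem HugePages_Total)   # 1024 in this run
    # same arithmetic as hugepages.sh@107/@110: the pool must account for the
    # requested pages plus any surplus and reserved pages, or the test fails
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2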
00:14:54.325 14:34:02 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:14:54.325 14:34:02 -- setup/common.sh@17 -- # local get=HugePages_Total
00:14:54.325 14:34:02 -- setup/common.sh@18 -- # local node=
00:14:54.325 14:34:02 -- setup/common.sh@19 -- # local var val
00:14:54.325 14:34:02 -- setup/common.sh@20 -- # local mem_f mem
00:14:54.325 14:34:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:54.325 14:34:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:14:54.325 14:34:02 -- setup/common.sh@25 -- # [[ -n '' ]]
00:14:54.325 14:34:02 -- setup/common.sh@28 -- # mapfile -t mem
00:14:54.325 14:34:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:54.325 14:34:02 -- setup/common.sh@31 -- # IFS=': '
00:14:54.325 14:34:02 -- setup/common.sh@31 -- # read -r var val _
00:14:54.325 14:34:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232248 kB' 'MemFree: 7965508 kB' 'MemAvailable: 9545188 kB' 'Buffers: 2436 kB' 'Cached: 1786940 kB' 'SwapCached: 0 kB' 'Active: 457352 kB' 'Inactive: 1454464 kB' 'Active(anon): 122240 kB' 'Inactive(anon): 10672 kB' 'Active(file): 335112 kB' 'Inactive(file): 1443792 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 936 kB' 'Writeback: 0 kB' 'AnonPages: 122556 kB' 'Mapped: 52016 kB' 'Shmem: 10472 kB' 'KReclaimable: 75136 kB' 'Slab: 147628 kB' 'SReclaimable: 75136 kB' 'SUnreclaim: 72492 kB' 'KernelStack: 4592 kB' 'PageTables: 3228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456152 kB' 'Committed_AS: 340756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53232 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 7163904 kB' 'DirectMap1G: 7340032 kB'
00:14:54.325 [... the same setup/common.sh@31/@32 read/compare/continue xtrace repeats for every field from MemTotal through Unaccepted ...]
00:14:54.326 14:34:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:14:54.326 14:34:02 -- setup/common.sh@33 -- # echo 1024
00:14:54.326 14:34:02 -- setup/common.sh@33 -- # return 0
00:14:54.326 14:34:02 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:14:54.326 14:34:02 -- setup/hugepages.sh@112 -- # get_nodes
00:14:54.326 14:34:02 -- setup/hugepages.sh@27 -- # local node
00:14:54.326 14:34:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:14:54.326 14:34:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:14:54.326 14:34:02 -- setup/hugepages.sh@32 -- # no_nodes=1
00:14:54.326 14:34:02 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
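get_nodes found a single NUMA node, so the per-node pass that follows re-reads the same counters from node0's own meminfo file. Those per-node files live under sysfs, and the hugepage counters can also be queried without any parsing; a small sketch of both routes (the paths are standard kernel sysfs, but the hugepages-2048kB directory name depends on the machine's default hugepage size):

    # route 1: the file the trace below parses, one "Node N"-prefixed line per field
    grep HugePages /sys/devices/system/node/node0/meminfo

    # route 2: dedicated per-node hugepage counters, no parsing needed
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        n=${node_dir##*node}
        echo "node$n: $(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages") x 2 MiB pages"
    done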
00:14:54.326 14:34:02 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:14:54.326 14:34:02 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:14:54.326 14:34:02 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:14:54.326 14:34:02 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:14:54.326 14:34:02 -- setup/common.sh@18 -- # local node=0
00:14:54.326 14:34:02 -- setup/common.sh@19 -- # local var val
00:14:54.326 14:34:02 -- setup/common.sh@20 -- # local mem_f mem
00:14:54.326 14:34:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:54.326 14:34:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:14:54.326 14:34:02 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:14:54.326 14:34:02 -- setup/common.sh@28 -- # mapfile -t mem
00:14:54.326 14:34:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:54.326 14:34:02 -- setup/common.sh@31 -- # IFS=': '
00:14:54.326 14:34:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232248 kB' 'MemFree: 7965508 kB' 'MemUsed: 4266740 kB' 'SwapCached: 0 kB' 'Active: 457332 kB' 'Inactive: 1454464 kB' 'Active(anon): 122220 kB' 'Inactive(anon): 10672 kB' 'Active(file): 335112 kB' 'Inactive(file): 1443792 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 936 kB' 'Writeback: 0 kB' 'FilePages: 1789376 kB' 'Mapped: 52016 kB' 'AnonPages: 122540 kB' 'Shmem: 10472 kB' 'KernelStack: 4592 kB' 'PageTables: 3228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 75136 kB' 'Slab: 147628 kB' 'SReclaimable: 75136 kB' 'SUnreclaim: 72492 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:14:54.326 14:34:02 -- setup/common.sh@31 -- # read -r var val _
00:14:54.326 [... the same setup/common.sh@31/@32 read/compare/continue xtrace repeats for every node0 field from MemTotal through HugePages_Free ...]
00:14:54.585 14:34:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:14:54.585 14:34:02 -- setup/common.sh@33 -- # echo 0
00:14:54.585 14:34:02 -- setup/common.sh@33 -- # return 0
00:14:54.585 14:34:02 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:14:54.585 14:34:02 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:14:54.585 14:34:02 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:14:54.585 14:34:02 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:14:54.585 14:34:02 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:14:54.585 node0=1024 expecting 1024
00:14:54.585 14:34:02 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:14:54.585
00:14:54.585 real	0m1.607s
00:14:54.585 user	0m0.660s
00:14:54.585 sys	0m0.862s
00:14:54.585 14:34:02 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:14:54.585 14:34:02 -- common/autotest_common.sh@10 -- # set +x
00:14:54.585 ************************************
00:14:54.585 END TEST default_setup
00:14:54.585 ************************************
00:14:54.585 14:34:02 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:14:54.585 14:34:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
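default_setup passed (node0 reported the expected 1024 pages) and the harness moves straight on to the next case through run_test, which is what produced the star banners and the real/user/sys timing above. The wrapper's visible behavior amounts to something like the following sketch; this is not the actual common/autotest_common.sh source, just the shape this log implies:

    # run_test NAME CMD [ARGS...] -- banner, timed run, banner
    run_test() {
        [ $# -le 1 ] && return 1   # needs a name plus a command; cf. '[' 2 -le 1 ']' above
        local name=$1
        shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                  # produces the real/user/sys lines in the log
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }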
00:14:54.585 14:34:02 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:54.585 14:34:02 -- common/autotest_common.sh@10 -- # set +x
00:14:54.585 ************************************
00:14:54.585 START TEST per_node_1G_alloc
00:14:54.585 ************************************
00:14:54.585 14:34:03 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc
00:14:54.585 14:34:03 -- setup/hugepages.sh@143 -- # local IFS=,
00:14:54.585 14:34:03 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:14:54.585 14:34:03 -- setup/hugepages.sh@49 -- # local size=1048576
00:14:54.585 14:34:03 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:14:54.585 14:34:03 -- setup/hugepages.sh@51 -- # shift
00:14:54.585 14:34:03 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:14:54.585 14:34:03 -- setup/hugepages.sh@52 -- # local node_ids
00:14:54.585 14:34:03 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:14:54.585 14:34:03 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:14:54.585 14:34:03 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:14:54.585 14:34:03 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:14:54.585 14:34:03 -- setup/hugepages.sh@62 -- # local user_nodes
00:14:54.585 14:34:03 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:14:54.585 14:34:03 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:14:54.585 14:34:03 -- setup/hugepages.sh@67 -- # nodes_test=()
00:14:54.585 14:34:03 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:14:54.585 14:34:03 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:14:54.585 14:34:03 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:14:54.585 14:34:03 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:14:54.585 14:34:03 -- setup/hugepages.sh@73 -- # return 0
00:14:54.585 14:34:03 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:14:54.585 14:34:03 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:14:54.585 14:34:03 -- setup/hugepages.sh@146 -- # setup output
00:14:54.585 14:34:03 -- setup/common.sh@9 -- # [[ output == output ]]
00:14:54.585 14:34:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:14:54.843 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:14:55.103 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:55.103 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:55.103 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:55.103 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:55.103 14:34:03 -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:14:55.103 14:34:03 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:14:55.103 14:34:03 -- setup/hugepages.sh@89 -- # local node
00:14:55.103 14:34:03 -- setup/hugepages.sh@90 -- # local sorted_t
00:14:55.103 14:34:03 -- setup/hugepages.sh@91 -- # local sorted_s
00:14:55.103 14:34:03 -- setup/hugepages.sh@92 -- # local surp
00:14:55.103 14:34:03 -- setup/hugepages.sh@93 -- # local resv
00:14:55.103 14:34:03 -- setup/hugepages.sh@94 -- # local anon
00:14:55.103 14:34:03 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
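per_node_1G_alloc requests 1 GiB of hugepage memory pinned to node 0, and the nr_hugepages=512 in the trace above is just that size divided by the 2 MiB default page before being handed to setup.sh as NRHUGE/HUGENODE. The arithmetic spelled out, with units inferred from the trace (sizes in kB, as in /proc/meminfo):

    size_kb=1048576                               # 1 GiB, the first argument above
    hugepagesize_kb=2048                          # 'Hugepagesize: 2048 kB' from the snapshots
    (( size_kb >= hugepagesize_kb ))              # the @55 guard
    nr_hugepages=$(( size_kb / hugepagesize_kb )) # 1048576 / 2048 = 512
    echo "NRHUGE=$nr_hugepages HUGENODE=0"        # matches the env passed to setup.sh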
14:34:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:55.103 14:34:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:14:55.103 14:34:03 -- setup/common.sh@25 -- # [[ -n '' ]]
00:14:55.103 14:34:03 -- setup/common.sh@28 -- # mapfile -t mem
00:14:55.103 14:34:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:55.103 14:34:03 -- setup/common.sh@31 -- # IFS=': '
00:14:55.103 14:34:03 -- setup/common.sh@31 -- # read -r var val _
00:14:55.103 14:34:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232248 kB' 'MemFree: 9002132 kB' 'MemAvailable: 10581820 kB' 'Buffers: 2436 kB' 'Cached: 1786948 kB' 'SwapCached: 0 kB' 'Active: 457704 kB' 'Inactive: 1454484 kB' 'Active(anon): 122592 kB' 'Inactive(anon): 10684 kB' 'Active(file): 335112 kB' 'Inactive(file): 1443800 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1076 kB' 'Writeback: 0 kB' 'AnonPages: 123112 kB' 'Mapped: 52088 kB' 'Shmem: 10472 kB' 'KReclaimable: 75136 kB' 'Slab: 147628 kB' 'SReclaimable: 75136 kB' 'SUnreclaim: 72492 kB' 'KernelStack: 4844 kB' 'PageTables: 3852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980440 kB' 'Committed_AS: 340448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53328 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 7163904 kB' 'DirectMap1G: 7340032 kB'
00:14:55.103 14:34:03 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:14:55.103 14:34:03 -- setup/common.sh@32 -- # continue
00:14:55.103 14:34:03 -- setup/common.sh@31 -- # IFS=': '
00:14:55.103 14:34:03 -- setup/common.sh@31 -- # read -r var val _
[... the same @32 test / @32 continue / @31 IFS / @31 read cycle repeats for every remaining key until the requested field is reached ...]
00:14:55.104 14:34:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:14:55.104 14:34:03 -- setup/common.sh@33 -- # echo 0
00:14:55.104 14:34:03 -- setup/common.sh@33 -- # return 0
00:14:55.104 14:34:03 -- setup/hugepages.sh@97 -- # anon=0
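The block above is setup/hugepages.sh deriving anon=0: get_meminfo scans a meminfo-style file line by line and prints the value of the first field whose name matches the requested one (AnonHugePages here). A minimal standalone sketch of that lookup, reconstructed from the traced commands (the real setup/common.sh may differ in detail):

    #!/usr/bin/env bash
    # Sketch of the traced get_meminfo helper.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # A per-node lookup reads that node's sysfs meminfo instead
        # (node is empty in the calls above, so the global file is used).
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            # IFS=': ' splits on ':' and ' ', so val is the bare number
            # and a trailing "kB" unit lands in the throwaway "_".
            [[ $var == $get ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo AnonHugePages   # prints 0 on this runner, hence anon=0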
00:14:55.104 14:34:03 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:14:55.104 14:34:03 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:14:55.104 14:34:03 -- setup/common.sh@18 -- # local node=
00:14:55.104 14:34:03 -- setup/common.sh@19 -- # local var val
00:14:55.104 14:34:03 -- setup/common.sh@20 -- # local mem_f mem
00:14:55.104 14:34:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:55.104 14:34:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:14:55.104 14:34:03 -- setup/common.sh@25 -- # [[ -n '' ]]
00:14:55.104 14:34:03 -- setup/common.sh@28 -- # mapfile -t mem
00:14:55.104 14:34:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:55.104 14:34:03 -- setup/common.sh@31 -- # IFS=': '
00:14:55.104 14:34:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232248 kB' 'MemFree: 9002132 kB' 'MemAvailable: 10581820 kB' 'Buffers: 2436 kB' 'Cached: 1786948 kB' 'SwapCached: 0 kB' 'Active: 457180 kB' 'Inactive: 1454472 kB' 'Active(anon): 122068 kB' 'Inactive(anon): 10672 kB' 'Active(file): 335112 kB' 'Inactive(file): 1443800 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1076 kB' 'Writeback: 0 kB' 'AnonPages: 122572 kB' 'Mapped: 51968 kB' 'Shmem: 10472 kB' 'KReclaimable: 75136 kB' 'Slab: 147692 kB' 'SReclaimable: 75136 kB' 'SUnreclaim: 72556 kB' 'KernelStack: 4688 kB' 'PageTables: 3440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980440 kB' 'Committed_AS: 340448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53280 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 7163904 kB' 'DirectMap1G: 7340032 kB'
00:14:55.104 14:34:03 -- setup/common.sh@31 -- # read -r var val _
[... per-key scan elided; no field short of HugePages_Surp matches ...]
00:14:55.106 14:34:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:14:55.106 14:34:03 -- setup/common.sh@33 -- # echo 0
00:14:55.106 14:34:03 -- setup/common.sh@33 -- # return 0
00:14:55.106 14:34:03 -- setup/hugepages.sh@99 -- # surp=0
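A side note on the odd-looking comparisons throughout this trace: the backslashes in [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] are not in the script source. The right-hand side of == is an unquoted $get, which is pattern-matching context, and bash's xtrace renders the expanded word with every character escaped to show it is matched literally rather than as a glob. A two-line reproduction (a default PS4 prints a "+" prefix; this runner uses a custom PS4, hence the "-- #" form):

    set -x
    get=HugePages_Surp var=MemTotal
    [[ $var == $get ]]
    # xtrace prints: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]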
00:14:55.106 14:34:03 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:14:55.106 14:34:03 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:14:55.106 14:34:03 -- setup/common.sh@18 -- # local node=
00:14:55.106 14:34:03 -- setup/common.sh@19 -- # local var val
00:14:55.106 14:34:03 -- setup/common.sh@20 -- # local mem_f mem
00:14:55.106 14:34:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:55.106 14:34:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:14:55.106 14:34:03 -- setup/common.sh@25 -- # [[ -n '' ]]
00:14:55.106 14:34:03 -- setup/common.sh@28 -- # mapfile -t mem
00:14:55.106 14:34:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:55.106 14:34:03 -- setup/common.sh@31 -- # IFS=': '
00:14:55.106 14:34:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232248 kB' 'MemFree: 9001628 kB' 'MemAvailable: 10581316 kB' 'Buffers: 2436 kB' 'Cached: 1786948 kB' 'SwapCached: 0 kB' 'Active: 457036 kB' 'Inactive: 1454464 kB' 'Active(anon): 121924 kB' 'Inactive(anon): 10664 kB' 'Active(file): 335112 kB' 'Inactive(file): 1443800 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 296 kB' 'Writeback: 0 kB' 'AnonPages: 122416 kB' 'Mapped: 51924 kB' 'Shmem: 10472 kB' 'KReclaimable: 75136 kB' 'Slab: 147752 kB' 'SReclaimable: 75136 kB' 'SUnreclaim: 72616 kB' 'KernelStack: 4624 kB' 'PageTables: 3300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980440 kB' 'Committed_AS: 340448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53280 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 7163904 kB' 'DirectMap1G: 7340032 kB'
[... per-key scan elided; no field short of HugePages_Rsvd matches ...]
00:14:55.369 14:34:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:14:55.369 14:34:03 -- setup/common.sh@33 -- # echo 0
00:14:55.369 14:34:03 -- setup/common.sh@33 -- # return 0
00:14:55.369 14:34:03 -- setup/hugepages.sh@100 -- # resv=0
00:14:55.369 14:34:03 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:14:55.369 nr_hugepages=512
00:14:55.369 14:34:03 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:14:55.369 resv_hugepages=0
00:14:55.369 14:34:03 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:14:55.369 surplus_hugepages=0
00:14:55.369 14:34:03 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:14:55.369 anon_hugepages=0
00:14:55.369 14:34:03 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:14:55.369 14:34:03 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
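At this point all four counters are in hand and the script asserts that the pool is exactly as configured: the 512 pages requested for the test must all be present, with no surplus pages (allocated beyond nr_hugepages) and no reserved-but-unfaulted pages unaccounted for. A sketch of that bookkeeping using the helper above (the hard-coded 512 comes from the trace; "expected" is an illustrative name, and the upstream hugepages.sh differs in detail):

    expected=512                                 # configured page count
    anon=$(get_meminfo AnonHugePages)            # THP-backed anon memory, kB: 0 here
    surp=$(get_meminfo HugePages_Surp)           # pages allocated beyond nr_hugepages: 0
    resv=$(get_meminfo HugePages_Rsvd)           # reserved, not yet faulted in: 0
    total=$(get_meminfo HugePages_Total)         # 512

    echo "nr_hugepages=$total"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    # Healthy pool: the kernel's total equals the expected count once
    # surplus and reserved pages are folded in.
    (( expected == total + surp + resv )) || echo 'hugepage accounting mismatch' >&2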
00:14:55.369 14:34:03 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:14:55.369 14:34:03 -- setup/common.sh@17 -- # local get=HugePages_Total
00:14:55.369 14:34:03 -- setup/common.sh@18 -- # local node=
00:14:55.369 14:34:03 -- setup/common.sh@19 -- # local var val
00:14:55.369 14:34:03 -- setup/common.sh@20 -- # local mem_f mem
00:14:55.369 14:34:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:55.369 14:34:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:14:55.369 14:34:03 -- setup/common.sh@25 -- # [[ -n '' ]]
00:14:55.369 14:34:03 -- setup/common.sh@28 -- # mapfile -t mem
00:14:55.369 14:34:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:55.369 14:34:03 -- setup/common.sh@31 -- # IFS=': '
00:14:55.369 14:34:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232248 kB' 'MemFree: 9001628 kB' 'MemAvailable: 10581316 kB' 'Buffers: 2436 kB' 'Cached: 1786948 kB' 'SwapCached: 0 kB' 'Active: 457244 kB' 'Inactive: 1454464 kB' 'Active(anon): 122132 kB' 'Inactive(anon): 10664 kB' 'Active(file): 335112 kB' 'Inactive(file): 1443800 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 296 kB' 'Writeback: 0 kB' 'AnonPages: 122620 kB' 'Mapped: 51924 kB' 'Shmem: 10472 kB' 'KReclaimable: 75136 kB' 'Slab: 147752 kB' 'SReclaimable: 75136 kB' 'SUnreclaim: 72616 kB' 'KernelStack: 4624 kB' 'PageTables: 3300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980440 kB' 'Committed_AS: 340448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53264 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 7163904 kB' 'DirectMap1G: 7340032 kB'
00:14:55.369 14:34:03 -- setup/common.sh@31 -- # read -r var val _
[... per-key scan elided; no field short of HugePages_Total matches ...]
00:14:55.370 14:34:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:14:55.370 14:34:03 -- setup/common.sh@33 -- # echo 512
00:14:55.370 14:34:03 -- setup/common.sh@33 -- # return 0
00:14:55.370 14:34:03 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:14:55.370 14:34:03 -- setup/hugepages.sh@112 -- # get_nodes
00:14:55.370 14:34:03 -- setup/hugepages.sh@27 -- # local node
00:14:55.370 14:34:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:14:55.370 14:34:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:14:55.370 14:34:03 -- setup/hugepages.sh@32 -- # no_nodes=1
00:14:55.370 14:34:03 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:14:55.370 14:34:03 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:14:55.370 14:34:03 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
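get_nodes then repeats the accounting per NUMA node: an extglob walks the sysfs node directories, the numeric index is peeled off the directory name, and each node is expected to hold its share of the pool (all 512 pages on this single-node VM). A sketch of that enumeration, following the traced commands:

    shopt -s extglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} strips everything through the last "node",
        # leaving the index: /sys/devices/system/node/node0 -> 0
        nodes_sys[${node##*node}]=512
    done
    no_nodes=${#nodes_sys[@]}             # 1 on this runner
    (( no_nodes > 0 )) || echo 'no NUMA nodes found' >&2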
00:14:55.370 14:34:03 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:14:55.370 14:34:03 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:14:55.370 14:34:03 -- setup/common.sh@18 -- # local node=0
00:14:55.370 14:34:03 -- setup/common.sh@19 -- # local var val
00:14:55.370 14:34:03 -- setup/common.sh@20 -- # local mem_f mem
00:14:55.370 14:34:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:55.370 14:34:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:14:55.370 14:34:03 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:14:55.370 14:34:03 -- setup/common.sh@28 -- # mapfile -t mem
00:14:55.370 14:34:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:55.370 14:34:03 -- setup/common.sh@31 -- # IFS=': '
00:14:55.370 14:34:03 -- setup/common.sh@31 -- # read -r var val _
00:14:55.370 14:34:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232248 kB' 'MemFree: 9001628 kB' 'MemUsed: 3230620 kB' 'SwapCached: 0 kB' 'Active: 457100 kB' 'Inactive: 1454464 kB' 'Active(anon): 121988 kB' 'Inactive(anon): 10664 kB' 'Active(file): 335112 kB' 'Inactive(file): 1443800 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 296 kB' 'Writeback: 0 kB' 'FilePages: 1789384 kB' 'Mapped: 51924 kB' 'AnonPages: 122216 kB' 'Shmem: 10472 kB' 'KernelStack: 4640 kB' 'PageTables: 3336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 75136 kB' 'Slab: 147752 kB' 'SReclaimable: 75136 kB' 'SUnreclaim: 72616 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
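This lookup reads /sys/devices/system/node/node0/meminfo rather than /proc/meminfo. The per-node file carries a reduced field set (MemUsed and FilePages in place of the global MemAvailable/Buffers/Cached, and no HugePages_Rsvd line), and each raw line is prefixed with "Node 0 ", which the mem=(...) expansion strips before the scan. A quick way to inspect node 0's hugepage counters directly, using the same idiom:

    # Raw per-node lines look like:  Node 0 HugePages_Total:   512
    shopt -s extglob
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}" | grep '^HugePages'   # Total/Free/Surp for node 0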
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # continue 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # continue 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # continue 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # continue 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # continue 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # continue 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # continue 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # continue 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # continue 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # continue 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # continue 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # continue 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # continue 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # IFS=': ' 
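[editor's note] The wall of continue lines here is bash xtrace replaying setup/common.sh's key scan: get_meminfo walks the meminfo snapshot one "key: value" pair at a time until it reaches the requested key (HugePages_Surp on this pass, for node 0). The backslash-escaped \H\u\g\e\P\a\g\e\s\_\S\u\r\p is just how xtrace prints a quoted match pattern, so the comparison is literal rather than a glob. A minimal sketch of that scan, reconstructed from the trace rather than copied from the SPDK source (the not-found fallback is an assumption):

    get_meminfo() {                       # sketch: get_meminfo KEY [NODE]
        local get=$1 node=$2 mem_f=/proc/meminfo
        local -a mem
        local line var val _
        shopt -s extglob
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # drop the "Node 0 " prefix on sysfs lines
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        echo 0                            # assumed fallback when the key is absent
    }
    get_meminfo HugePages_Surp 0          # -> 0 for node 0 in the snapshot above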
00:14:55.371 14:34:03 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # continue 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # continue 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # continue 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # continue 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # continue 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # continue 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # continue 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # continue 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # continue 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # continue 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # continue 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # continue 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
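[editor's note] Worth pausing on the numbers in the node-0 snapshot this scan is walking: 512 hugepages of 2048 kB is exactly 1 GiB, which is what per_node_1G_alloc reserved and why the test prints node0=512 expecting 512 once the scan returns just below. As a quick check:

    pages=512 hugepagesize_kb=2048            # HugePages_Total / Hugepagesize from the snapshot
    echo "$(( pages * hugepagesize_kb )) kB"  # -> 1048576 kB, i.e. 1 GiB on node 0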
00:14:55.371 14:34:03 -- setup/common.sh@32 -- # continue 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # continue 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # continue 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # continue 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.371 14:34:03 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.371 14:34:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.371 14:34:03 -- setup/common.sh@33 -- # echo 0 00:14:55.371 14:34:03 -- setup/common.sh@33 -- # return 0 00:14:55.371 14:34:03 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:14:55.371 14:34:03 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:14:55.371 14:34:03 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:14:55.371 14:34:03 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:14:55.371 node0=512 expecting 512 00:14:55.371 14:34:03 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:14:55.371 14:34:03 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:14:55.371 00:14:55.371 real 0m0.780s 00:14:55.371 user 0m0.303s 00:14:55.371 sys 0m0.413s 00:14:55.371 14:34:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:55.371 ************************************ 00:14:55.371 END TEST per_node_1G_alloc 00:14:55.371 ************************************ 00:14:55.371 14:34:03 -- common/autotest_common.sh@10 -- # set +x 00:14:55.371 14:34:03 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:14:55.371 14:34:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:55.371 14:34:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:55.371 14:34:03 -- common/autotest_common.sh@10 -- # set +x 00:14:55.371 ************************************ 00:14:55.371 START TEST even_2G_alloc 00:14:55.371 ************************************ 00:14:55.371 14:34:03 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:14:55.371 14:34:03 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:14:55.371 14:34:03 -- setup/hugepages.sh@49 -- # local size=2097152 00:14:55.371 14:34:03 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:14:55.371 14:34:03 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:14:55.371 14:34:03 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:14:55.372 14:34:03 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:14:55.372 14:34:03 -- setup/hugepages.sh@62 -- # user_nodes=() 00:14:55.372 14:34:03 -- setup/hugepages.sh@62 -- # local user_nodes 00:14:55.372 14:34:03 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:14:55.372 14:34:03 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:14:55.372 14:34:03 -- setup/hugepages.sh@67 -- # nodes_test=() 00:14:55.372 14:34:03 -- setup/hugepages.sh@67 -- # local -g nodes_test 
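[editor's note] START TEST even_2G_alloc above opens with get_test_nr_hugepages 2097152. The trace prints no units, but the numbers only line up if size is in kB: 2 GiB over the 2048 kB default hugepage size yields the nr_hugepages=1024 the trace assigns, and with no user-supplied node list the per-node helper that follows puts the whole count on the single node. The arithmetic, with the kB units as an assumption:

    size_kb=2097152                      # the 2 GiB behind "even_2G_alloc" (units assumed)
    default_hugepages_kb=2048            # Hugepagesize: 2048 kB
    nr_hugepages=$(( size_kb / default_hugepages_kb ))
    echo "nr_hugepages=$nr_hugepages"    # -> 1024, matching HugePages_Total later in the log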
00:14:55.372 14:34:03 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:14:55.372 14:34:03 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:14:55.372 14:34:03 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:14:55.372 14:34:03 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:14:55.372 14:34:03 -- setup/hugepages.sh@83 -- # : 0 00:14:55.372 14:34:03 -- setup/hugepages.sh@84 -- # : 0 00:14:55.372 14:34:03 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:14:55.372 14:34:03 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:14:55.372 14:34:03 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:14:55.372 14:34:03 -- setup/hugepages.sh@153 -- # setup output 00:14:55.372 14:34:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:55.372 14:34:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:55.939 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:55.939 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:55.939 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:55.939 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:55.939 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:55.939 14:34:04 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:14:55.939 14:34:04 -- setup/hugepages.sh@89 -- # local node 00:14:55.939 14:34:04 -- setup/hugepages.sh@90 -- # local sorted_t 00:14:55.939 14:34:04 -- setup/hugepages.sh@91 -- # local sorted_s 00:14:55.939 14:34:04 -- setup/hugepages.sh@92 -- # local surp 00:14:55.939 14:34:04 -- setup/hugepages.sh@93 -- # local resv 00:14:55.939 14:34:04 -- setup/hugepages.sh@94 -- # local anon 00:14:55.939 14:34:04 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:14:55.939 14:34:04 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:14:55.939 14:34:04 -- setup/common.sh@17 -- # local get=AnonHugePages 00:14:55.939 14:34:04 -- setup/common.sh@18 -- # local node= 00:14:55.939 14:34:04 -- setup/common.sh@19 -- # local var val 00:14:55.939 14:34:04 -- setup/common.sh@20 -- # local mem_f mem 00:14:55.939 14:34:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:55.939 14:34:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:55.939 14:34:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:55.939 14:34:04 -- setup/common.sh@28 -- # mapfile -t mem 00:14:55.939 14:34:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232248 kB' 'MemFree: 7957000 kB' 'MemAvailable: 9536692 kB' 'Buffers: 2436 kB' 'Cached: 1786952 kB' 'SwapCached: 0 kB' 'Active: 458404 kB' 'Inactive: 1454456 kB' 'Active(anon): 123292 kB' 'Inactive(anon): 10652 kB' 'Active(file): 335112 kB' 'Inactive(file): 1443804 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 556 kB' 'Writeback: 0 kB' 'AnonPages: 123468 kB' 'Mapped: 51924 kB' 'Shmem: 10472 kB' 'KReclaimable: 75136 kB' 'Slab: 147816 kB' 'SReclaimable: 75136 kB' 'SUnreclaim: 72680 kB' 'KernelStack: 4864 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456152 kB' 'Committed_AS: 340756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53328 kB' 
'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 7163904 kB' 'DirectMap1G: 7340032 kB' 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ 
Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 
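[editor's note] Unlike the node-0 scan earlier, this verify pass calls get_meminfo with no node, so the [[ -e /sys/devices/system/node/node/meminfo ]] test above probes a malformed path, fails, and the helper falls back to /proc/meminfo. The extglob substitution after mapfile is what lets one parser serve both files: sysfs per-node lines carry a "Node N " prefix that it strips, while on /proc/meminfo lines it is a no-op. A standalone demonstration on sample lines (not the live data):

    shopt -s extglob
    mem=("Node 0 MemTotal: 12232248 kB" "MemFree: 7956760 kB")
    mem=("${mem[@]#Node +([0-9]) }")     # strip "Node <digits> " where present
    printf '%s\n' "${mem[@]}"            # -> "MemTotal: 12232248 kB" / "MemFree: 7956760 kB"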
00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- 
# IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:55.939 14:34:04 -- setup/common.sh@33 -- # echo 0 00:14:55.939 14:34:04 -- setup/common.sh@33 -- # return 0 00:14:55.939 14:34:04 -- setup/hugepages.sh@97 -- # anon=0 00:14:55.939 14:34:04 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:14:55.939 14:34:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:55.939 14:34:04 -- setup/common.sh@18 -- # local node= 00:14:55.939 14:34:04 -- setup/common.sh@19 -- # local var val 00:14:55.939 14:34:04 -- setup/common.sh@20 -- # local mem_f mem 00:14:55.939 14:34:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:55.939 14:34:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:55.939 14:34:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:55.939 14:34:04 -- setup/common.sh@28 -- # mapfile -t mem 00:14:55.939 14:34:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232248 kB' 'MemFree: 7956760 kB' 'MemAvailable: 9536452 kB' 'Buffers: 2436 kB' 'Cached: 1786952 kB' 'SwapCached: 0 kB' 'Active: 457408 kB' 'Inactive: 1454456 kB' 'Active(anon): 122296 kB' 'Inactive(anon): 10652 kB' 'Active(file): 335112 kB' 'Inactive(file): 1443804 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 556 kB' 'Writeback: 0 kB' 'AnonPages: 122508 kB' 'Mapped: 52200 kB' 'Shmem: 10472 kB' 'KReclaimable: 75136 kB' 'Slab: 147832 kB' 'SReclaimable: 75136 kB' 'SUnreclaim: 72696 kB' 'KernelStack: 4716 kB' 'PageTables: 3620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456152 kB' 'Committed_AS: 340392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53216 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 7163904 kB' 'DirectMap1G: 7340032 kB' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # 
IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.939 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.939 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.940 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.940 14:34:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.940 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.940 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.940 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.940 14:34:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.940 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.940 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.940 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.940 14:34:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.940 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.940 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.940 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.940 14:34:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.940 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.940 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.940 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.940 14:34:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.940 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.940 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.940 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.940 14:34:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.940 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.940 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.940 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.940 14:34:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.940 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.940 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.940 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.940 14:34:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.940 14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.940 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:55.940 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:55.940 14:34:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:55.940 
14:34:04 -- setup/common.sh@32 -- # continue 00:14:55.940 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.200 14:34:04 -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.200 14:34:04 -- setup/common.sh@31 
-- # IFS=': ' 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.200 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.200 14:34:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:56.201 14:34:04 -- setup/common.sh@33 -- # echo 0 00:14:56.201 14:34:04 -- setup/common.sh@33 -- # return 0 00:14:56.201 14:34:04 -- setup/hugepages.sh@99 -- # surp=0 00:14:56.201 14:34:04 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:14:56.201 14:34:04 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:14:56.201 14:34:04 -- setup/common.sh@18 -- # local node= 00:14:56.201 14:34:04 -- setup/common.sh@19 -- # local var val 00:14:56.201 14:34:04 -- setup/common.sh@20 -- # local mem_f mem 00:14:56.201 14:34:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:56.201 14:34:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:56.201 14:34:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:56.201 14:34:04 -- setup/common.sh@28 -- # mapfile -t mem 00:14:56.201 14:34:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.201 14:34:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232248 kB' 'MemFree: 7956760 kB' 'MemAvailable: 9536452 kB' 'Buffers: 2436 kB' 'Cached: 1786952 kB' 'SwapCached: 0 kB' 'Active: 457164 kB' 'Inactive: 1454456 kB' 'Active(anon): 122052 kB' 'Inactive(anon): 10652 kB' 'Active(file): 335112 kB' 'Inactive(file): 1443804 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 556 kB' 'Writeback: 0 kB' 'AnonPages: 122512 kB' 'Mapped: 51940 kB' 'Shmem: 10472 kB' 'KReclaimable: 75136 kB' 'Slab: 147816 kB' 'SReclaimable: 75136 kB' 'SUnreclaim: 72680 kB' 'KernelStack: 4640 kB' 'PageTables: 3272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456152 kB' 'Committed_AS: 340756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53216 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 7163904 kB' 'DirectMap1G: 7340032 kB' 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.201 14:34:04 -- 
setup/common.sh@31 -- # read -r var val _ 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.201 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.201 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.202 14:34:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.202 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.202 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.202 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.202 14:34:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.202 14:34:04 -- setup/common.sh@32 -- # continue 
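[editor's note] This third scan collects HugePages_Rsvd, the last input for the consistency check verify_nr_hugepages performs a few lines below: the pool the kernel reports must equal the requested count plus surplus plus reserved pages, and the anon/surp values gathered so far are all 0. The identity being tested, with this run's values:

    total=1024          # get_meminfo HugePages_Total
    nr_hugepages=1024   # requested by even_2G_alloc
    surp=0 resv=0       # HugePages_Surp / HugePages_Rsvd from these scans
    (( total == nr_hugepages + surp + resv )) && echo "hugepage counts consistent"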
00:14:56.202 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.202 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.202 14:34:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.202 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.202 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.202 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.202 14:34:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.202 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.202 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.202 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.202 14:34:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.202 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.202 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.202 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.202 14:34:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.202 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.202 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.202 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.202 14:34:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.202 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.202 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.202 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.202 14:34:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.202 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.202 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.202 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.202 14:34:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.202 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.202 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.202 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.202 14:34:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.202 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.202 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.202 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.202 14:34:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.202 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.202 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.202 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.202 14:34:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.202 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.202 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.202 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.202 14:34:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.202 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.202 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.202 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.202 14:34:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.202 14:34:04 -- setup/common.sh@32 -- # continue 00:14:56.202 14:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.202 14:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.202 14:34:04 -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:14:56.202 14:34:04 -- setup/common.sh@32 -- # continue
[... repetitive xtrace elided: FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total and HugePages_Free are each read and skipped with 'continue' until HugePages_Rsvd matches ...]
00:14:56.202 14:34:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:14:56.202 14:34:04 -- setup/common.sh@33 -- # echo 0
00:14:56.202 14:34:04 -- setup/common.sh@33 -- # return 0
00:14:56.202 14:34:04 -- setup/hugepages.sh@100 -- # resv=0
00:14:56.202 nr_hugepages=1024
00:14:56.202 14:34:04 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:14:56.202 resv_hugepages=0
00:14:56.202 14:34:04 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:14:56.202 surplus_hugepages=0
00:14:56.202 14:34:04 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:14:56.202 anon_hugepages=0
00:14:56.202 14:34:04 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:14:56.202 14:34:04 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:14:56.202 14:34:04 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:14:56.202 14:34:04 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:14:56.202 14:34:04 -- setup/common.sh@17 -- # local get=HugePages_Total
00:14:56.202 14:34:04 -- setup/common.sh@18 -- # local node=
00:14:56.202 14:34:04 -- setup/common.sh@19 -- # local var val
00:14:56.202 14:34:04 -- setup/common.sh@20 -- # local mem_f mem
00:14:56.202 14:34:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:56.202 14:34:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:14:56.202 14:34:04 -- setup/common.sh@25 -- # [[ -n '' ]]
00:14:56.202 14:34:04 -- setup/common.sh@28 -- # mapfile -t mem
00:14:56.202 14:34:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:56.202 14:34:04 -- setup/common.sh@31 -- # IFS=': '
00:14:56.202 14:34:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232248 kB' 'MemFree: 7956760 kB' 'MemAvailable: 9536452 kB' 'Buffers: 2436 kB' 'Cached: 1786952 kB' 'SwapCached: 0 kB' 'Active: 457492 kB' 'Inactive: 1454456 kB' 'Active(anon): 122380 kB' 'Inactive(anon): 10652 kB' 'Active(file): 335112 kB' 'Inactive(file): 1443804 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 556 kB' 'Writeback: 0 kB' 'AnonPages: 122876 kB' 'Mapped: 51940 kB' 'Shmem: 10472 kB' 'KReclaimable: 75136 kB' 'Slab: 147816 kB' 'SReclaimable: 75136 kB' 'SUnreclaim: 72680 kB' 'KernelStack: 4656 kB' 'PageTables: 3308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456152 kB' 'Committed_AS: 340756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53216 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 7163904 kB' 'DirectMap1G: 7340032 kB'
00:14:56.202 14:34:04 -- setup/common.sh@31 -- # read -r var val _
[... repetitive xtrace elided: every meminfo key from MemTotal onward is read and skipped with 'continue' until HugePages_Total matches ...]
00:14:56.203 14:34:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:14:56.203 14:34:04 -- setup/common.sh@33 -- # echo 1024
00:14:56.203 14:34:04 -- setup/common.sh@33 -- # return 0
00:14:56.203 14:34:04 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
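The scans above come from get_meminfo in setup/common.sh: it walks a meminfo file line by line with IFS=': ' and echoes the value of the first key that matches. A minimal standalone sketch of that idea (the helper name get_meminfo_value and the exact structure are ours, not SPDK's verbatim code):

    #!/usr/bin/env bash
    # Sketch of the get_meminfo idea traced above (approximation, not SPDK's code).
    shopt -s extglob                         # for the +([0-9]) pattern below

    get_meminfo_value() {                    # hypothetical name
        local get=$1 node=${2:-}             # key to look up, optional NUMA node
        local mem_f=/proc/meminfo var val rest line
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem <"$mem_f"
        # Per-node files prefix every line with "Node <n> "; strip it so the
        # same "Key: value" parse handles both file formats.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val rest <<<"$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo_value HugePages_Total        # -> 1024 on the state logged above
    get_meminfo_value HugePages_Surp 0       # -> 0, read from node0's meminfo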
00:14:56.203 14:34:04 -- setup/hugepages.sh@112 -- # get_nodes
00:14:56.204 14:34:04 -- setup/hugepages.sh@27 -- # local node
00:14:56.204 14:34:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:14:56.204 14:34:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:14:56.204 14:34:04 -- setup/hugepages.sh@32 -- # no_nodes=1
00:14:56.204 14:34:04 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:14:56.204 14:34:04 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:14:56.204 14:34:04 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:14:56.204 14:34:04 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:14:56.204 14:34:04 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:14:56.204 14:34:04 -- setup/common.sh@18 -- # local node=0
00:14:56.204 14:34:04 -- setup/common.sh@19 -- # local var val
00:14:56.204 14:34:04 -- setup/common.sh@20 -- # local mem_f mem
00:14:56.204 14:34:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:56.204 14:34:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:14:56.204 14:34:04 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:14:56.204 14:34:04 -- setup/common.sh@28 -- # mapfile -t mem
00:14:56.204 14:34:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:56.204 14:34:04 -- setup/common.sh@31 -- # IFS=': '
00:14:56.204 14:34:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232248 kB' 'MemFree: 7956760 kB' 'MemUsed: 4275488 kB' 'SwapCached: 0 kB' 'Active: 457260 kB' 'Inactive: 1454472 kB' 'Active(anon): 122148 kB' 'Inactive(anon): 10664 kB' 'Active(file): 335112 kB' 'Inactive(file): 1443808 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 604 kB' 'Writeback: 0 kB' 'FilePages: 1789392 kB' 'Mapped: 51932 kB' 'AnonPages: 122400 kB' 'Shmem: 10472 kB' 'KernelStack: 4608 kB' 'PageTables: 3256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 75112 kB' 'Slab: 147868 kB' 'SReclaimable: 75112 kB' 'SUnreclaim: 72756 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:14:56.204 14:34:04 -- setup/common.sh@31 -- # read -r var val _
[... repetitive xtrace elided: every node0 meminfo key from MemTotal onward is read and skipped with 'continue' until HugePages_Surp matches ...]
00:14:56.205 14:34:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:14:56.205 14:34:04 -- setup/common.sh@33 -- # echo 0
00:14:56.205 14:34:04 -- setup/common.sh@33 -- # return 0
00:14:56.205 14:34:04 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
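The get_nodes/nodes_test bookkeeping above amounts to: enumerate the NUMA nodes under /sys/devices/system/node, read each node's HugePages_Total, and check that the per-node figures add up to the global one (here a single node0 carrying all 1024 pages). A rough standalone equivalent, simplified from the test's actual bookkeeping:

    #!/usr/bin/env bash
    # Sketch: per-node hugepage totals vs the global count (simplified).
    shopt -s nullglob
    total=0
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        # Per-node meminfo lines read "Node <n> HugePages_Total: <pages>".
        pages=$(awk -v n="$node" \
            '$1 == "Node" && $2 == n && $3 == "HugePages_Total:" { print $4 }' \
            "$node_dir/meminfo")
        echo "node${node}=${pages}"
        (( total += pages ))
    done
    echo "sum over nodes: ${total}"          # the log above shows node0=1024
    grep HugePages_Total: /proc/meminfo      # global figure; should agree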
00:14:56.205 14:34:04 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:14:56.205 14:34:04 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:14:56.205 14:34:04 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:14:56.205 14:34:04 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:14:56.205 node0=1024 expecting 1024
00:14:56.205 14:34:04 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:14:56.205 real 0m0.781s
00:14:56.205 user 0m0.310s
00:14:56.205 sys 0m0.417s
00:14:56.205 14:34:04 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:14:56.205 14:34:04 -- common/autotest_common.sh@10 -- # set +x
00:14:56.205 ************************************
00:14:56.205 END TEST even_2G_alloc
00:14:56.205 ************************************
00:14:56.205 14:34:04 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:14:56.205 14:34:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:14:56.205 14:34:04 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:56.205 14:34:04 -- common/autotest_common.sh@10 -- # set +x
00:14:56.463 ************************************
00:14:56.463 START TEST odd_alloc
00:14:56.463 ************************************
00:14:56.463 14:34:04 -- common/autotest_common.sh@1111 -- # odd_alloc
00:14:56.463 14:34:04 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:14:56.463 14:34:04 -- setup/hugepages.sh@49 -- # local size=2098176
00:14:56.463 14:34:04 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:14:56.463 14:34:04 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:14:56.463 14:34:04 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:14:56.463 14:34:04 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:14:56.463 14:34:04 -- setup/hugepages.sh@62 -- # user_nodes=()
00:14:56.463 14:34:04 -- setup/hugepages.sh@62 -- # local user_nodes
00:14:56.463 14:34:04 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:14:56.463 14:34:04 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:14:56.463 14:34:04 -- setup/hugepages.sh@67 -- # nodes_test=()
00:14:56.463 14:34:04 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:14:56.463 14:34:04 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:14:56.463 14:34:04 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:14:56.463 14:34:04 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:14:56.463 14:34:04 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:14:56.463 14:34:04 -- setup/hugepages.sh@83 -- # : 0
00:14:56.463 14:34:04 -- setup/hugepages.sh@84 -- # : 0
00:14:56.463 14:34:04 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:14:56.463 14:34:04 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:14:56.463 14:34:04 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:14:56.463 14:34:04 -- setup/hugepages.sh@160 -- # setup output
00:14:56.463 14:34:04 -- setup/common.sh@9 -- # [[ output == output ]]
00:14:56.463 14:34:04 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:14:56.721 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:14:56.721 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:56.721 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:56.721 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:56.721 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:56.982 14:34:05 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
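Sizing sanity check for the odd_alloc setup above: HUGEMEM=2049 (MB) becomes size=2098176 kB, and with a 2048 kB Hugepagesize that yields nr_hugepages=1025. The round-up below is our assumption about how the half-page remainder becomes a whole extra page; the script's exact rule may differ:

    # Values taken from the trace above; the ceiling division is an assumption.
    hugemem_mb=2049                          # HUGEMEM=2049
    size_kb=$(( hugemem_mb * 1024 ))         # 2098176, as passed to get_test_nr_hugepages
    hugepagesize_kb=2048                     # Hugepagesize: 2048 kB
    nr=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))
    echo "nr_hugepages=${nr}"                # prints 1025, matching hugepages.sh@57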
00:14:56.982 14:34:05 -- setup/hugepages.sh@89 -- # local node
00:14:56.982 14:34:05 -- setup/hugepages.sh@90 -- # local sorted_t
00:14:56.982 14:34:05 -- setup/hugepages.sh@91 -- # local sorted_s
00:14:56.982 14:34:05 -- setup/hugepages.sh@92 -- # local surp
00:14:56.982 14:34:05 -- setup/hugepages.sh@93 -- # local resv
00:14:56.982 14:34:05 -- setup/hugepages.sh@94 -- # local anon
00:14:56.982 14:34:05 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:14:56.982 14:34:05 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:14:56.982 14:34:05 -- setup/common.sh@17 -- # local get=AnonHugePages
00:14:56.982 14:34:05 -- setup/common.sh@18 -- # local node=
00:14:56.982 14:34:05 -- setup/common.sh@19 -- # local var val
00:14:56.982 14:34:05 -- setup/common.sh@20 -- # local mem_f mem
00:14:56.982 14:34:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:56.982 14:34:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:14:56.982 14:34:05 -- setup/common.sh@25 -- # [[ -n '' ]]
00:14:56.982 14:34:05 -- setup/common.sh@28 -- # mapfile -t mem
00:14:56.982 14:34:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:56.982 14:34:05 -- setup/common.sh@31 -- # IFS=': '
00:14:56.982 14:34:05 -- setup/common.sh@31 -- # read -r var val _
00:14:56.983 14:34:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232248 kB' 'MemFree: 7953780 kB' 'MemAvailable: 9533460 kB' 'Buffers: 2436 kB' 'Cached: 1786952 kB' 'SwapCached: 0 kB' 'Active: 457468 kB' 'Inactive: 1454480 kB' 'Active(anon): 122356 kB' 'Inactive(anon): 10676 kB' 'Active(file): 335112 kB' 'Inactive(file): 1443804 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 768 kB' 'Writeback: 0 kB' 'AnonPages: 122624 kB' 'Mapped: 52016 kB' 'Shmem: 10472 kB' 'KReclaimable: 75112 kB' 'Slab: 147868 kB' 'SReclaimable: 75112 kB' 'SUnreclaim: 72756 kB' 'KernelStack: 4584 kB' 'PageTables: 3240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13455128 kB' 'Committed_AS: 340756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53312 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 7163904 kB' 'DirectMap1G: 7340032 kB'
[... repetitive xtrace elided: every meminfo key from MemTotal onward is read and skipped with 'continue' until AnonHugePages matches ...]
00:14:56.983 14:34:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:14:56.984 14:34:05 -- setup/common.sh@33 -- # echo 0
00:14:56.984 14:34:05 -- setup/common.sh@33 -- # return 0
00:14:56.984 14:34:05 -- setup/hugepages.sh@97 -- # anon=0
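The '[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]' entries above are the xtrace-escaped form of a pattern match against /sys/kernel/mm/transparent_hugepage/enabled; the anon hugepage count is presumably only of interest when THP is not pinned to [never]. A standalone sketch of that check (our reading of the gating, not SPDK's verbatim logic):

    #!/usr/bin/env bash
    # Sketch: only report AnonHugePages when THP is not globally disabled.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(awk '$1 == "AnonHugePages:" { print $2 }' /proc/meminfo)
    else
        anon=0
    fi
    echo "anon_hugepages=${anon}"            # 0 in the trace above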
00:14:56.984 14:34:05 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:14:56.984 14:34:05 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:14:56.984 14:34:05 -- setup/common.sh@18 -- # local node=
00:14:56.984 14:34:05 -- setup/common.sh@19 -- # local var val
00:14:56.984 14:34:05 -- setup/common.sh@20 -- # local mem_f mem
00:14:56.984 14:34:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:56.984 14:34:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:14:56.984 14:34:05 -- setup/common.sh@25 -- # [[ -n '' ]]
00:14:56.984 14:34:05 -- setup/common.sh@28 -- # mapfile -t mem
00:14:56.984 14:34:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:56.984 14:34:05 -- setup/common.sh@31 -- # IFS=': '
00:14:56.984 14:34:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232248 kB' 'MemFree: 7953780 kB' 'MemAvailable: 9533460 kB' 'Buffers: 2436 kB' 'Cached: 1786952 kB' 'SwapCached: 0 kB' 'Active: 457408 kB' 'Inactive: 1454472 kB' 'Active(anon): 122296 kB' 'Inactive(anon): 10668 kB' 'Active(file): 335112 kB' 'Inactive(file): 1443804 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 768 kB' 'Writeback: 0 kB' 'AnonPages: 122544 kB' 'Mapped: 51904 kB' 'Shmem: 10472 kB' 'KReclaimable: 75112 kB' 'Slab: 147868 kB' 'SReclaimable: 75112 kB' 'SUnreclaim: 72756 kB' 'KernelStack: 4616 kB' 'PageTables: 3308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13455128 kB' 'Committed_AS: 340756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53280 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 7163904 kB' 'DirectMap1G: 7340032 kB'
00:14:56.984 14:34:05 -- setup/common.sh@31 -- # read -r var val _
[... repetitive xtrace elided: every meminfo key from MemTotal onward is read and skipped with 'continue' until HugePages_Surp matches ...]
00:14:56.985 14:34:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:14:56.985 14:34:05 -- setup/common.sh@33 -- # echo 0
00:14:56.985 14:34:05 -- setup/common.sh@33 -- # return 0
00:14:56.985 14:34:05 -- setup/hugepages.sh@99 -- # surp=0
00:14:56.985 14:34:05 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:14:56.985 14:34:05 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:14:56.985 14:34:05 -- setup/common.sh@18 -- # local node=
00:14:56.985 14:34:05 -- setup/common.sh@19 -- # local var val
00:14:56.985 14:34:05 -- setup/common.sh@20 -- # local mem_f mem
00:14:56.985 14:34:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:56.985 14:34:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:14:56.985 14:34:05 -- setup/common.sh@25 -- # [[ -n '' ]]
00:14:56.985 14:34:05 -- setup/common.sh@28 -- # mapfile -t mem
00:14:56.985 14:34:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:56.985 14:34:05 -- setup/common.sh@31 -- # IFS=': '
00:14:56.985 14:34:05 -- setup/common.sh@31 -- # read -r var val _
00:14:56.985 14:34:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232248 kB' 'MemFree: 7953780 kB' 'MemAvailable: 9533460 kB' 'Buffers: 2436 kB' 'Cached: 1786952 kB' 'SwapCached: 0 kB' 'Active: 457436 kB' 'Inactive: 1454472 kB'
'Active(anon): 122324 kB' 'Inactive(anon): 10668 kB' 'Active(file): 335112 kB' 'Inactive(file): 1443804 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 768 kB' 'Writeback: 0 kB' 'AnonPages: 122572 kB' 'Mapped: 51904 kB' 'Shmem: 10472 kB' 'KReclaimable: 75112 kB' 'Slab: 147868 kB' 'SReclaimable: 75112 kB' 'SUnreclaim: 72756 kB' 'KernelStack: 4616 kB' 'PageTables: 3308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13455128 kB' 'Committed_AS: 340756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53280 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 7163904 kB' 'DirectMap1G: 7340032 kB' 00:14:56.985 14:34:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.985 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.985 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.985 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.985 14:34:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.985 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.985 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.985 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.985 14:34:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.985 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.985 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.985 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.985 14:34:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.985 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.985 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.985 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.985 14:34:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.985 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.985 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.985 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.985 14:34:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.985 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.985 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.985 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.985 14:34:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.985 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.985 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.985 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.985 14:34:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.985 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.985 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.985 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.985 14:34:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.985 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.985 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.985 14:34:05 -- 
setup/common.sh@31 -- # read -r var val _ 00:14:56.985 14:34:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.985 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.985 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.985 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.985 14:34:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.985 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.985 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.985 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.985 14:34:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.985 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.985 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.985 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.985 14:34:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.985 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.985 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.985 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.985 14:34:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.985 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.985 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.985 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # 
continue 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # 
IFS=': ' 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # continue 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.986 14:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:14:56.986 14:34:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:56.986 14:34:05 -- setup/common.sh@33 -- # echo 0 00:14:56.986 14:34:05 -- setup/common.sh@33 -- # return 0 00:14:56.986 nr_hugepages=1025 00:14:56.986 resv_hugepages=0 00:14:56.986 surplus_hugepages=0 00:14:56.986 anon_hugepages=0 00:14:56.986 14:34:05 -- setup/hugepages.sh@100 -- # resv=0 00:14:56.986 14:34:05 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:14:56.986 14:34:05 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:14:56.986 14:34:05 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:14:56.986 14:34:05 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:14:56.986 14:34:05 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:14:56.986 14:34:05 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:14:56.986 14:34:05 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:14:56.986 14:34:05 -- setup/common.sh@17 -- # local get=HugePages_Total 00:14:56.986 14:34:05 -- setup/common.sh@18 -- # local node= 00:14:56.987 14:34:05 -- setup/common.sh@19 -- # local var val 00:14:56.987 14:34:05 -- setup/common.sh@20 -- # local mem_f mem 00:14:56.987 14:34:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:56.987 14:34:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:56.987 14:34:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:56.987 14:34:05 -- setup/common.sh@28 -- # mapfile -t mem 00:14:56.987 14:34:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:56.987 14:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:14:56.987 14:34:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232248 kB' 'MemFree: 7953780 kB' 'MemAvailable: 9533460 kB' 'Buffers: 2436 kB' 'Cached: 1786952 kB' 'SwapCached: 0 kB' 'Active: 457444 kB' 'Inactive: 1454472 kB' 'Active(anon): 122332 kB' 'Inactive(anon): 10668 kB' 'Active(file): 335112 kB' 'Inactive(file): 1443804 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 768 kB' 'Writeback: 0 kB' 'AnonPages: 122580 kB' 'Mapped: 51904 kB' 'Shmem: 10472 kB' 'KReclaimable: 75112 kB' 'Slab: 147868 kB' 'SReclaimable: 75112 kB' 'SUnreclaim: 72756 kB' 'KernelStack: 4632 kB' 'PageTables: 3344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13455128 kB' 'Committed_AS: 340756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53280 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
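The match/continue runs trimmed above are setup/common.sh's get_meminfo walking /proc/meminfo (or a per-node meminfo file) one key at a time until it finds the one requested. A minimal sketch of that lookup, pieced together from the xtrace itself; get_meminfo_sketch is a hypothetical stand-in name, not the verbatim function:

    #!/usr/bin/env bash
    shopt -s extglob   # for the +([0-9]) pattern used on per-node files

    # Look up one meminfo key, optionally scoped to a NUMA node.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem line
        # Per-node queries read the node's own meminfo file instead.
        if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the trimmed continue runs above
            echo "$val"                        # e.g. 1025 for HugePages_Total
            return 0
        done
        return 1
    }

Calling get_meminfo_sketch HugePages_Rsvd reproduces the walk condensed above: every key fails the match until HugePages_Rsvd, whose value (0) is echoed.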
00:14:56.987 14:34:05 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:14:56.987 14:34:05 -- setup/common.sh@17 -- # local get=HugePages_Total
00:14:56.987 14:34:05 -- setup/common.sh@18 -- # local node=
00:14:56.987 14:34:05 -- setup/common.sh@19 -- # local var val
00:14:56.987 14:34:05 -- setup/common.sh@20 -- # local mem_f mem
00:14:56.987 14:34:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:56.987 14:34:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:14:56.987 14:34:05 -- setup/common.sh@25 -- # [[ -n '' ]]
00:14:56.987 14:34:05 -- setup/common.sh@28 -- # mapfile -t mem
00:14:56.987 14:34:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:56.987 14:34:05 -- setup/common.sh@31 -- # IFS=': '
00:14:56.987 14:34:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232248 kB' 'MemFree: 7953780 kB' 'MemAvailable: 9533460 kB' 'Buffers: 2436 kB' 'Cached: 1786952 kB' 'SwapCached: 0 kB' 'Active: 457444 kB' 'Inactive: 1454472 kB' 'Active(anon): 122332 kB' 'Inactive(anon): 10668 kB' 'Active(file): 335112 kB' 'Inactive(file): 1443804 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 768 kB' 'Writeback: 0 kB' 'AnonPages: 122580 kB' 'Mapped: 51904 kB' 'Shmem: 10472 kB' 'KReclaimable: 75112 kB' 'Slab: 147868 kB' 'SReclaimable: 75112 kB' 'SUnreclaim: 72756 kB' 'KernelStack: 4632 kB' 'PageTables: 3344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13455128 kB' 'Committed_AS: 340756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53280 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 7163904 kB' 'DirectMap1G: 7340032 kB'
00:14:56.987 14:34:05 -- setup/common.sh@31 -- # read -r var val _
[... xtrace trimmed: MemTotal through Unaccepted all fail the HugePages_Total match and continue ...]
00:14:57.247 14:34:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:14:57.247 14:34:05 -- setup/common.sh@33 -- # echo 1025
00:14:57.247 14:34:05 -- setup/common.sh@33 -- # return 0
00:14:57.247 14:34:05 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:14:57.247 14:34:05 -- setup/hugepages.sh@112 -- # get_nodes
00:14:57.247 14:34:05 -- setup/hugepages.sh@27 -- # local node
00:14:57.247 14:34:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:14:57.247 14:34:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:14:57.247 14:34:05 -- setup/hugepages.sh@32 -- # no_nodes=1
00:14:57.247 14:34:05 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:14:57.247 14:34:05 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:14:57.247 14:34:05 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
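The per-node pass that follows folds reserved and surplus pages into each node's expected count before comparing totals. A rough sketch of that bookkeeping, reusing get_meminfo_sketch from the note above; the composition is a paraphrase of setup/hugepages.sh, not its verbatim code:

    shopt -s extglob
    nodes_test=() nodes_sys=()
    resv=0   # HugePages_Rsvd, as echoed by the lookup above
    for node_dir in /sys/devices/system/node/node+([0-9]); do
        node=${node_dir##*node}
        # What the kernel actually placed on this node:
        nodes_sys[node]=$(get_meminfo_sketch HugePages_Total "$node")
        # What the test expects: its per-node request, plus any reserved and
        # surplus pages the kernel is tracking on top of it.
        nodes_test[node]=1025
        (( nodes_test[node] += resv ))
        surp=$(get_meminfo_sketch HugePages_Surp "$node")
        (( nodes_test[node] += surp ))
    done
    echo "node0=${nodes_sys[0]} expecting ${nodes_test[0]}"

With a single node and no reserved or surplus pages, both sides stay at the allocated 1025, which is exactly what the node0=1025 line below reports.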
00:14:57.247 14:34:05 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:14:57.247 14:34:05 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:14:57.247 14:34:05 -- setup/common.sh@18 -- # local node=0
00:14:57.247 14:34:05 -- setup/common.sh@19 -- # local var val
00:14:57.247 14:34:05 -- setup/common.sh@20 -- # local mem_f mem
00:14:57.247 14:34:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:57.247 14:34:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:14:57.247 14:34:05 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:14:57.247 14:34:05 -- setup/common.sh@28 -- # mapfile -t mem
00:14:57.247 14:34:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:57.247 14:34:05 -- setup/common.sh@31 -- # IFS=': '
00:14:57.247 14:34:05 -- setup/common.sh@31 -- # read -r var val _
00:14:57.248 14:34:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232248 kB' 'MemFree: 7953780 kB' 'MemUsed: 4278468 kB' 'SwapCached: 0 kB' 'Active: 457432 kB' 'Inactive: 1454480 kB' 'Active(anon): 122320 kB' 'Inactive(anon): 10676 kB' 'Active(file): 335112 kB' 'Inactive(file): 1443804 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 768 kB' 'Writeback: 0 kB' 'FilePages: 1789388 kB' 'Mapped: 52132 kB' 'AnonPages: 122592 kB' 'Shmem: 10472 kB' 'KernelStack: 4600 kB' 'PageTables: 3268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 75112 kB' 'Slab: 147868 kB' 'SReclaimable: 75112 kB' 'SUnreclaim: 72756 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
[... xtrace trimmed: node0 keys MemTotal through HugePages_Free all fail the HugePages_Surp match and continue ...]
00:14:57.249 14:34:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:14:57.249 14:34:05 -- setup/common.sh@33 -- # echo 0
00:14:57.249 14:34:05 -- setup/common.sh@33 -- # return 0
00:14:57.249 14:34:05 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:14:57.249 14:34:05 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:14:57.249 14:34:05 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:14:57.249 node0=1025 expecting 1025
00:14:57.249 ************************************
00:14:57.249 END TEST odd_alloc
00:14:57.249 ************************************
00:14:57.249 14:34:05 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:14:57.249 14:34:05 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:14:57.249 14:34:05 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:14:57.249
00:14:57.249 real	0m0.785s
00:14:57.249 user	0m0.325s
00:14:57.249 sys	0m0.388s
00:14:57.249 14:34:05 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:14:57.249 14:34:05 -- common/autotest_common.sh@10 -- # set +x
00:14:57.249 14:34:05 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:14:57.249 14:34:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:14:57.249 14:34:05 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:57.249 14:34:05 -- common/autotest_common.sh@10 -- # set +x
00:14:57.249 ************************************
00:14:57.249 START TEST custom_alloc
00:14:57.249 ************************************
00:14:57.249 14:34:05 -- common/autotest_common.sh@1111 -- # custom_alloc
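custom_alloc ends in the same verify_nr_hugepages pass that odd_alloc just cleared. Its core check, compressed from the @107/@110 arithmetic above into a few lines, again using the sketch helper from earlier; nr_hugepages is whatever the test requested:

    nr_hugepages=512   # custom_alloc's request, derived below
    total=$(get_meminfo_sketch HugePages_Total)
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    # The pool the kernel reports must equal the request plus whatever
    # surplus and reserved pages it is tracking on top of it.
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch'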
00:14:57.249 14:34:05 -- setup/hugepages.sh@167 -- # local IFS=,
00:14:57.249 14:34:05 -- setup/hugepages.sh@169 -- # local node
00:14:57.249 14:34:05 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:14:57.249 14:34:05 -- setup/hugepages.sh@170 -- # local nodes_hp
00:14:57.249 14:34:05 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:14:57.249 14:34:05 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:14:57.249 14:34:05 -- setup/hugepages.sh@49 -- # local size=1048576
00:14:57.249 14:34:05 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:14:57.249 14:34:05 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:14:57.249 14:34:05 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:14:57.249 14:34:05 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:14:57.249 14:34:05 -- setup/hugepages.sh@62 -- # user_nodes=()
00:14:57.249 14:34:05 -- setup/hugepages.sh@62 -- # local user_nodes
00:14:57.249 14:34:05 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:14:57.249 14:34:05 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:14:57.249 14:34:05 -- setup/hugepages.sh@67 -- # nodes_test=()
00:14:57.249 14:34:05 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:14:57.249 14:34:05 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:14:57.249 14:34:05 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:14:57.249 14:34:05 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:14:57.249 14:34:05 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:14:57.249 14:34:05 -- setup/hugepages.sh@83 -- # : 0
00:14:57.249 14:34:05 -- setup/hugepages.sh@84 -- # : 0
00:14:57.249 14:34:05 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:14:57.249 14:34:05 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:14:57.249 14:34:05 -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:14:57.249 14:34:05 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:14:57.249 14:34:05 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:14:57.249 14:34:05 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:14:57.249 14:34:05 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:14:57.249 14:34:05 -- setup/hugepages.sh@62 -- # user_nodes=()
00:14:57.249 14:34:05 -- setup/hugepages.sh@62 -- # local user_nodes
00:14:57.249 14:34:05 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:14:57.249 14:34:05 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:14:57.249 14:34:05 -- setup/hugepages.sh@67 -- # nodes_test=()
00:14:57.249 14:34:05 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:14:57.249 14:34:05 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:14:57.249 14:34:05 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:14:57.249 14:34:05 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:14:57.249 14:34:05 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:14:57.249 14:34:05 -- setup/hugepages.sh@78 -- # return 0
00:14:57.249 14:34:05 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:14:57.249 14:34:05 -- setup/hugepages.sh@187 -- # setup output
00:14:57.249 14:34:05 -- setup/common.sh@9 -- # [[ output == output ]]
00:14:57.249 14:34:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
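The @174/@55/@57 lines above are unit conversion: get_test_nr_hugepages turns a size budget in kB into a page count using the default hugepage size. Spelled out with the values from this run (Hugepagesize is 2048 kB in every snapshot here):

    size_kb=1048576           # 1 GiB worth of hugepages, custom_alloc's request
    hugepage_kb=2048          # Hugepagesize from /proc/meminfo
    echo $(( size_kb / hugepage_kb ))   # prints 512, the nr_hugepages above

All 512 pages land on the only node, hence HUGENODE='nodes_hp[0]=512' before setup.sh is invoked.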
00:14:57.507 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:14:57.768 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:57.768 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:57.768 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:57.768 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:57.768 14:34:06 -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:14:57.768 14:34:06 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:14:57.768 14:34:06 -- setup/hugepages.sh@89 -- # local node
00:14:57.768 14:34:06 -- setup/hugepages.sh@90 -- # local sorted_t
00:14:57.768 14:34:06 -- setup/hugepages.sh@91 -- # local sorted_s
00:14:57.768 14:34:06 -- setup/hugepages.sh@92 -- # local surp
00:14:57.768 14:34:06 -- setup/hugepages.sh@93 -- # local resv
00:14:57.768 14:34:06 -- setup/hugepages.sh@94 -- # local anon
00:14:57.768 14:34:06 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
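The @96 guard just above decides whether transparent hugepages could contribute anonymous hugepages that verify_nr_hugepages must account for: the THP mode on this box is 'always [madvise] never', i.e. not disabled, so AnonHugePages gets sampled next. A sketch of that guard, reusing the earlier helper; the anon=0 fallback is an assumption about the disabled case, not taken from this trace:

    # Only sample AnonHugePages when THP is not set to [never].
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)   # 0 kB in the snapshot below
    else
        anon=0   # assumed: with THP off there is nothing to subtract
    fi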
00:14:57.768 14:34:06 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:14:57.768 14:34:06 -- setup/common.sh@17 -- # local get=AnonHugePages
00:14:57.768 14:34:06 -- setup/common.sh@18 -- # local node=
00:14:57.768 14:34:06 -- setup/common.sh@19 -- # local var val
00:14:57.768 14:34:06 -- setup/common.sh@20 -- # local mem_f mem
00:14:57.768 14:34:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:57.768 14:34:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:14:57.768 14:34:06 -- setup/common.sh@25 -- # [[ -n '' ]]
00:14:57.769 14:34:06 -- setup/common.sh@28 -- # mapfile -t mem
00:14:57.769 14:34:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:57.769 14:34:06 -- setup/common.sh@31 -- # IFS=': '
00:14:57.769 14:34:06 -- setup/common.sh@31 -- # read -r var val _
00:14:57.769 14:34:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232248 kB' 'MemFree: 9003272 kB' 'MemAvailable: 10582940 kB' 'Buffers: 2436 kB' 'Cached: 1786956 kB' 'SwapCached: 0 kB' 'Active: 458300 kB' 'Inactive: 1454480 kB' 'Active(anon): 123188 kB' 'Inactive(anon): 10672 kB' 'Active(file): 335112 kB' 'Inactive(file): 1443808 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 940 kB' 'Writeback: 0 kB' 'AnonPages: 123252 kB' 'Mapped: 52076 kB' 'Shmem: 10472 kB' 'KReclaimable: 75080 kB' 'Slab: 147800 kB' 'SReclaimable: 75080 kB' 'SUnreclaim: 72720 kB' 'KernelStack: 4612 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980440 kB' 'Committed_AS: 340756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53312 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 7163904 kB' 'DirectMap1G: 7340032 kB'
[... xtrace trimmed: keys MemTotal through HardwareCorrupted all fail the AnonHugePages match and continue ...]
00:14:57.770 14:34:06 -- setup/common.sh@32 -- # [[ AnonHugePages ==
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:57.770 14:34:06 -- setup/common.sh@33 -- # echo 0 00:14:57.770 14:34:06 -- setup/common.sh@33 -- # return 0 00:14:57.770 14:34:06 -- setup/hugepages.sh@97 -- # anon=0 00:14:57.770 14:34:06 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:14:57.770 14:34:06 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:57.770 14:34:06 -- setup/common.sh@18 -- # local node= 00:14:57.770 14:34:06 -- setup/common.sh@19 -- # local var val 00:14:57.770 14:34:06 -- setup/common.sh@20 -- # local mem_f mem 00:14:57.770 14:34:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:57.770 14:34:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:57.770 14:34:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:57.770 14:34:06 -- setup/common.sh@28 -- # mapfile -t mem 00:14:57.770 14:34:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:57.770 14:34:06 -- setup/common.sh@31 -- # IFS=': ' 00:14:57.770 14:34:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232248 kB' 'MemFree: 9003020 kB' 'MemAvailable: 10582688 kB' 'Buffers: 2436 kB' 'Cached: 1786956 kB' 'SwapCached: 0 kB' 'Active: 457160 kB' 'Inactive: 1454472 kB' 'Active(anon): 122048 kB' 'Inactive(anon): 10664 kB' 'Active(file): 335112 kB' 'Inactive(file): 1443808 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 936 kB' 'Writeback: 0 kB' 'AnonPages: 122540 kB' 'Mapped: 52000 kB' 'Shmem: 10472 kB' 'KReclaimable: 75080 kB' 'Slab: 147892 kB' 'SReclaimable: 75080 kB' 'SUnreclaim: 72812 kB' 'KernelStack: 4624 kB' 'PageTables: 3300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980440 kB' 'Committed_AS: 340756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53264 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 7163904 kB' 'DirectMap1G: 7340032 kB' 00:14:57.770 14:34:06 -- setup/common.sh@31 -- # read -r var val _ 00:14:57.770 14:34:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:57.770 14:34:06 -- setup/common.sh@32 -- # continue 00:14:57.770 14:34:06 -- setup/common.sh@31 -- # IFS=': ' 00:14:57.770 14:34:06 -- setup/common.sh@31 -- # read -r var val _ 00:14:57.770 14:34:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:57.770 14:34:06 -- setup/common.sh@32 -- # continue 00:14:57.770 14:34:06 -- setup/common.sh@31 -- # IFS=': ' 00:14:57.770 14:34:06 -- setup/common.sh@31 -- # read -r var val _ 00:14:57.770 14:34:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:57.770 14:34:06 -- setup/common.sh@32 -- # continue 00:14:57.770 14:34:06 -- setup/common.sh@31 -- # IFS=': ' 00:14:57.770 14:34:06 -- setup/common.sh@31 -- # read -r var val _ 00:14:57.770 14:34:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:57.770 14:34:06 -- setup/common.sh@32 -- # continue 00:14:57.770 14:34:06 -- setup/common.sh@31 -- # IFS=': ' 00:14:57.770 14:34:06 -- setup/common.sh@31 -- # read -r var val _ 00:14:57.770 14:34:06 -- setup/common.sh@32 -- # [[ Cached == 
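The pass above is setup/common.sh's get_meminfo doing its one job: snapshot the meminfo source, then read it line by line with IFS=': ', continue past every key that is not the requested one, and echo the matching key's value (here AnonHugePages -> 0, assigned to anon). The backslash-quoted right-hand sides in the trace ([[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]) are just how bash xtrace prints a quoted pattern that matches literally. A minimal sketch of that loop, assuming direct reads from /proc/meminfo rather than the script's mapfile snapshot:

    #!/usr/bin/env bash
    # Minimal re-creation of the get_meminfo pattern traced above.
    # Assumption: reads /proc/meminfo directly instead of capturing a
    # snapshot with mapfile first, as the real setup/common.sh does.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # every other key: continue
            echo "$val"                        # matched: print the value
            return 0
        done < /proc/meminfo
        return 1                               # key not present
    }

    get_meminfo HugePages_Surp    # prints e.g. 0

With IFS set to ':' and space, read splits "HugePages_Surp:      0" into var=HugePages_Surp and val=0, which is why the trace's echo emits the bare number.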
00:14:57.770 14:34:06 -- setup/common.sh@32 -- # ... [key-matching loop: MemTotal through HugePages_Rsvd each fail [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] and continue] ...
00:14:57.771 14:34:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:14:57.771 14:34:06 -- setup/common.sh@33 -- # echo 0
00:14:57.771 14:34:06 -- setup/common.sh@33 -- # return 0
00:14:57.771 14:34:06 -- setup/hugepages.sh@99 -- # surp=0
00:14:57.771 14:34:06 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:14:57.771 14:34:06 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:14:57.771 14:34:06 -- setup/common.sh@18 -- # local node=
00:14:57.771 14:34:06 -- setup/common.sh@19 -- # local var val
00:14:57.771 14:34:06 -- setup/common.sh@20 -- # local mem_f mem
00:14:57.771 14:34:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:57.771 14:34:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:14:57.771 14:34:06 -- setup/common.sh@25 -- # [[ -n '' ]]
00:14:57.771 14:34:06 -- setup/common.sh@28 -- # mapfile -t mem
00:14:57.771 14:34:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
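Note that each counter costs a full pass: get_meminfo re-captures and re-scans the whole snapshot per key (AnonHugePages, then HugePages_Surp, then HugePages_Rsvd, and HugePages_Total below). Purely as an illustration, and not how setup/common.sh or hugepages.sh are written, the same counters could be collected in one scan:

    #!/usr/bin/env bash
    # Illustrative alternative only: gather every HugePages_* counter in
    # a single pass over /proc/meminfo. The hp array is this sketch's own.
    declare -A hp
    while IFS=': ' read -r var val _; do
        [[ $var == HugePages_* ]] && hp[$var]=$val
    done < /proc/meminfo

    echo "total=${hp[HugePages_Total]} free=${hp[HugePages_Free]}"
    echo "rsvd=${hp[HugePages_Rsvd]} surp=${hp[HugePages_Surp]}"

The one-key-per-call shape of the real script keeps its call sites simple (surp=$(get_meminfo HugePages_Surp)), at the cost of the repeated scans visible in this trace.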
00:14:57.771 14:34:06 -- setup/common.sh@31 -- # IFS=': '
00:14:57.771 14:34:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232248 kB' 'MemFree: 9003020 kB' 'MemAvailable: 10582688 kB' 'Buffers: 2436 kB' 'Cached: 1786956 kB' 'SwapCached: 0 kB' 'Active: 457120 kB' 'Inactive: 1454472 kB' 'Active(anon): 122008 kB' 'Inactive(anon): 10664 kB' 'Active(file): 335112 kB' 'Inactive(file): 1443808 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 936 kB' 'Writeback: 0 kB' 'AnonPages: 122496 kB' 'Mapped: 51952 kB' 'Shmem: 10472 kB' 'KReclaimable: 75080 kB' 'Slab: 147892 kB' 'SReclaimable: 75080 kB' 'SUnreclaim: 72812 kB' 'KernelStack: 4608 kB' 'PageTables: 3264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980440 kB' 'Committed_AS: 340756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53280 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 7163904 kB' 'DirectMap1G: 7340032 kB'
00:14:57.771 14:34:06 -- setup/common.sh@31 -- # read -r var val _
00:14:57.772 14:34:06 -- setup/common.sh@32 -- # ... [key-matching loop: MemTotal through HugePages_Free each fail [[ $var == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] and continue] ...
00:14:57.772 14:34:06 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:14:57.773 14:34:06 -- setup/common.sh@33 -- # echo 0
00:14:57.773 14:34:06 -- setup/common.sh@33 -- # return 0
00:14:57.773 14:34:06 -- setup/hugepages.sh@100 -- # resv=0
00:14:57.773 nr_hugepages=512
00:14:57.773 resv_hugepages=0
00:14:57.773 14:34:06 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:14:57.773 14:34:06 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:14:57.773 surplus_hugepages=0
00:14:57.773 14:34:06 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:14:57.773 anon_hugepages=0
00:14:57.773 14:34:06 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:14:57.773 14:34:06 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:14:57.773 14:34:06 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:14:57.773 14:34:06 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:14:57.773 14:34:06 -- setup/common.sh@17 -- # local get=HugePages_Total
00:14:57.773 14:34:06 -- setup/common.sh@18 -- # local node=
00:14:57.773 14:34:06 -- setup/common.sh@19 -- # local var val
00:14:57.773 14:34:06 -- setup/common.sh@20 -- # local mem_f mem
00:14:57.773 14:34:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:57.773 14:34:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:14:57.773 14:34:06 -- setup/common.sh@25 -- # [[ -n '' ]]
00:14:57.773 14:34:06 -- setup/common.sh@28 -- # mapfile -t mem
00:14:57.773 14:34:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:57.773 14:34:06 -- setup/common.sh@31 -- # IFS=': '
00:14:57.773 14:34:06 -- setup/common.sh@31 -- # read -r var val _
00:14:57.773 14:34:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232248 kB' 'MemFree: 9003020 kB' 'MemAvailable: 10582688 kB' 'Buffers: 2436 kB' 'Cached: 1786956 kB' 'SwapCached: 0 kB' 'Active: 457068 kB' 'Inactive: 1454472 kB' 'Active(anon): 121956 kB' 'Inactive(anon): 10664 kB' 'Active(file): 335112 kB' 'Inactive(file): 1443808 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 936 kB' 'Writeback: 0 kB' 'AnonPages: 122440 kB' 'Mapped: 51952 kB' 'Shmem: 10472 kB' 'KReclaimable: 75080 kB' 'Slab: 147892 kB' 'SReclaimable: 75080 kB' 'SUnreclaim: 72812 kB' 'KernelStack: 4608 kB' 'PageTables: 3264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980440 kB' 'Committed_AS: 340756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53280 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 7163904 kB' 'DirectMap1G: 7340032 kB'
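Around these passes hugepages.sh verifies the accounting it just echoed (nr_hugepages=512, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0): the configured count must equal nr_hugepages on its own (@109) and also nr_hugepages + surp + resv (@107), and the same sum is re-checked against the HugePages_Total read back from the snapshot (@110, completed below). The same arithmetic, worked with the trace's values, as a sketch:

    #!/usr/bin/env bash
    # Worked form of the hugepages.sh@107-110 checks, with the values
    # visible in this trace: 512 pages requested, surp=0, resv=0, and
    # HugePages_Total read back as 512.
    nr_hugepages=512 surp=0 resv=0
    total=512   # value returned by get_meminfo HugePages_Total

    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: 512 == 512 + 0 + 0"
    else
        echo "mismatch: total=$total, expected=$((nr_hugepages + surp + resv))" >&2
        exit 1
    fi

Since surplus and reserved pages are both zero here, all three checks reduce to 512 == 512 and the run proceeds.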
00:14:57.773 14:34:06 -- setup/common.sh@32 -- # ... [key-matching loop: MemTotal through Unaccepted each fail [[ $var == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] and continue] ...
00:14:58.034 14:34:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:14:58.034 14:34:06 -- setup/common.sh@33 -- # echo 512
00:14:58.034 14:34:06 -- setup/common.sh@33 -- # return 0
00:14:58.034 14:34:06 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:14:58.034 14:34:06 -- setup/hugepages.sh@112 -- # get_nodes
00:14:58.034 14:34:06 -- setup/hugepages.sh@27 -- # local node
00:14:58.034 14:34:06 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:14:58.034 14:34:06 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:14:58.034 14:34:06 -- setup/hugepages.sh@32 -- # no_nodes=1
00:14:58.034 14:34:06 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:14:58.034 14:34:06 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:14:58.034 14:34:06 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:14:58.034 14:34:06 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:14:58.034 14:34:06 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:14:58.034 14:34:06 -- setup/common.sh@18 -- # local node=0
00:14:58.034 14:34:06 -- setup/common.sh@19 -- # local var val
00:14:58.034 14:34:06 -- setup/common.sh@20 -- # local mem_f mem
00:14:58.034 14:34:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:58.034 14:34:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:14:58.034 14:34:06 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:14:58.034 14:34:06 -- setup/common.sh@28 -- # mapfile -t mem
00:14:58.034 14:34:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:58.034 14:34:06 -- setup/common.sh@31 -- # IFS=': '
00:14:58.034 14:34:06 -- setup/common.sh@31 -- # read -r var val _
00:14:58.034 14:34:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232248 kB' 'MemFree: 9003020 kB' 'MemUsed: 3229228 kB' 'SwapCached: 0 kB' 'Active: 457064 kB' 'Inactive: 1454472 kB' 'Active(anon): 121952 kB' 'Inactive(anon): 10664 kB' 'Active(file): 335112 kB' 'Inactive(file): 1443808 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 936 kB' 'Writeback: 0 kB' 'FilePages: 1789392 kB' 'Mapped: 51952 kB' 'AnonPages: 122440 kB' 'Shmem: 10472 kB' 'KernelStack: 4608 kB' 'PageTables: 3264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 75080 kB' 'Slab: 147892 kB' 'SReclaimable: 75080 kB' 'SUnreclaim: 72812 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
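The last get_meminfo call runs per NUMA node: with node=0, common.sh@23-24 swaps mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo, and common.sh@29 strips the leading "Node 0 " that sysfs prepends to each line, so the same key-matching loop works on both sources. A sketch of that selection and stripping (the function name is this sketch's own; extglob is assumed, as the trace's +([0-9]) pattern implies):

    #!/usr/bin/env bash
    shopt -s extglob   # enables the +([0-9]) pattern used below

    # Sketch of get_meminfo's source selection: per-node sysfs file when
    # a node number is given, /proc/meminfo otherwise, with the "Node N "
    # prefix stripped so both sources parse identically.
    node_meminfo() {
        local node=$1 mem_f=/proc/meminfo mem
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
        printf '%s\n' "${mem[@]}"
    }

    node_meminfo 0 | grep HugePages_Surp   # e.g. "HugePages_Surp: 0"

The node0 snapshot above also shows why the stripping matters: per-node meminfo has fewer fields (MemUsed, FilePages) and different names than /proc/meminfo, but after the prefix is removed the key: value shape is the same.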
14:34:06 -- setup/common.sh@31 -- # IFS=': ' 00:14:58.034 14:34:06 -- setup/common.sh@31 -- # read -r var val _ 00:14:58.034 14:34:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:58.034 14:34:06 -- setup/common.sh@32 -- # continue 00:14:58.034 14:34:06 -- setup/common.sh@31 -- # IFS=': ' 00:14:58.034 14:34:06 -- setup/common.sh@31 -- # read -r var val _ 00:14:58.034 14:34:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:58.034 14:34:06 -- setup/common.sh@32 -- # continue 00:14:58.034 14:34:06 -- setup/common.sh@31 -- # IFS=': ' 00:14:58.034 14:34:06 -- setup/common.sh@31 -- # read -r var val _ 00:14:58.034 14:34:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:58.034 14:34:06 -- setup/common.sh@32 -- # continue 00:14:58.034 14:34:06 -- setup/common.sh@31 -- # IFS=': ' 00:14:58.034 14:34:06 -- setup/common.sh@31 -- # read -r var val _ 00:14:58.034 14:34:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:58.034 14:34:06 -- setup/common.sh@32 -- # continue 00:14:58.034 14:34:06 -- setup/common.sh@31 -- # IFS=': ' 00:14:58.034 14:34:06 -- setup/common.sh@31 -- # read -r var val _ 00:14:58.034 14:34:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:58.034 14:34:06 -- setup/common.sh@32 -- # continue 00:14:58.034 14:34:06 -- setup/common.sh@31 -- # IFS=': ' 00:14:58.034 14:34:06 -- setup/common.sh@31 -- # read -r var val _ 00:14:58.034 14:34:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:58.034 14:34:06 -- setup/common.sh@32 -- # continue 00:14:58.034 14:34:06 -- setup/common.sh@31 -- # IFS=': ' 00:14:58.034 14:34:06 -- setup/common.sh@31 -- # read -r var val _ 00:14:58.034 14:34:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:58.034 14:34:06 -- setup/common.sh@32 -- # continue 00:14:58.034 14:34:06 -- setup/common.sh@31 -- # IFS=': ' 00:14:58.034 14:34:06 -- setup/common.sh@31 -- # read -r var val _ 00:14:58.034 14:34:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:58.034 14:34:06 -- setup/common.sh@32 -- # continue 00:14:58.034 14:34:06 -- setup/common.sh@31 -- # IFS=': ' 00:14:58.034 14:34:06 -- setup/common.sh@31 -- # read -r var val _ 00:14:58.034 14:34:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:58.034 14:34:06 -- setup/common.sh@32 -- # continue 00:14:58.034 14:34:06 -- setup/common.sh@31 -- # IFS=': ' 00:14:58.034 14:34:06 -- setup/common.sh@31 -- # read -r var val _ 00:14:58.034 14:34:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:58.034 14:34:06 -- setup/common.sh@32 -- # continue 00:14:58.034 14:34:06 -- setup/common.sh@31 -- # IFS=': ' 00:14:58.034 14:34:06 -- setup/common.sh@31 -- # read -r var val _ 00:14:58.034 14:34:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:58.034 14:34:06 -- setup/common.sh@32 -- # continue 00:14:58.034 14:34:06 -- setup/common.sh@31 -- # IFS=': ' 00:14:58.034 14:34:06 -- setup/common.sh@31 -- # read -r var val _ 00:14:58.034 14:34:06 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:58.034 14:34:06 -- setup/common.sh@32 -- # continue 00:14:58.034 14:34:06 -- setup/common.sh@31 -- # IFS=': ' 00:14:58.034 14:34:06 -- setup/common.sh@31 -- # read -r var val _ 00:14:58.034 14:34:06 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:58.034 14:34:06 -- setup/common.sh@32 -- # continue 00:14:58.034 14:34:06 -- setup/common.sh@31 -- # IFS=': ' 00:14:58.034 14:34:06 -- setup/common.sh@31 -- # read -r var val _ 00:14:58.034 14:34:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:58.034 14:34:06 -- setup/common.sh@32 -- # continue 00:14:58.034 14:34:06 -- setup/common.sh@31 -- # IFS=': ' 00:14:58.034 14:34:06 -- setup/common.sh@31 -- # read -r var val _ 00:14:58.035 14:34:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:58.035 14:34:06 -- setup/common.sh@32 -- # continue 00:14:58.035 14:34:06 -- setup/common.sh@31 -- # IFS=': ' 00:14:58.035 14:34:06 -- setup/common.sh@31 -- # read -r var val _ 00:14:58.035 14:34:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:58.035 14:34:06 -- setup/common.sh@32 -- # continue 00:14:58.035 14:34:06 -- setup/common.sh@31 -- # IFS=': ' 00:14:58.035 14:34:06 -- setup/common.sh@31 -- # read -r var val _ 00:14:58.035 14:34:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:58.035 14:34:06 -- setup/common.sh@32 -- # continue 00:14:58.035 14:34:06 -- setup/common.sh@31 -- # IFS=': ' 00:14:58.035 14:34:06 -- setup/common.sh@31 -- # read -r var val _ 00:14:58.035 14:34:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:58.035 14:34:06 -- setup/common.sh@32 -- # continue 00:14:58.035 14:34:06 -- setup/common.sh@31 -- # IFS=': ' 00:14:58.035 14:34:06 -- setup/common.sh@31 -- # read -r var val _ 00:14:58.035 14:34:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:58.035 14:34:06 -- setup/common.sh@32 -- # continue 00:14:58.035 14:34:06 -- setup/common.sh@31 -- # IFS=': ' 00:14:58.035 14:34:06 -- setup/common.sh@31 -- # read -r var val _ 00:14:58.035 14:34:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:58.035 14:34:06 -- setup/common.sh@32 -- # continue 00:14:58.035 14:34:06 -- setup/common.sh@31 -- # IFS=': ' 00:14:58.035 14:34:06 -- setup/common.sh@31 -- # read -r var val _ 00:14:58.035 14:34:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:58.035 14:34:06 -- setup/common.sh@32 -- # continue 00:14:58.035 14:34:06 -- setup/common.sh@31 -- # IFS=': ' 00:14:58.035 14:34:06 -- setup/common.sh@31 -- # read -r var val _ 00:14:58.035 14:34:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:58.035 14:34:06 -- setup/common.sh@32 -- # continue 00:14:58.035 14:34:06 -- setup/common.sh@31 -- # IFS=': ' 00:14:58.035 14:34:06 -- setup/common.sh@31 -- # read -r var val _ 00:14:58.035 14:34:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:58.035 14:34:06 -- setup/common.sh@32 -- # continue 00:14:58.035 14:34:06 -- setup/common.sh@31 -- # IFS=': ' 00:14:58.035 14:34:06 -- setup/common.sh@31 -- # read -r var val _ 00:14:58.035 14:34:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:58.035 14:34:06 -- setup/common.sh@32 -- # continue 00:14:58.035 14:34:06 -- setup/common.sh@31 -- # IFS=': ' 00:14:58.035 14:34:06 -- setup/common.sh@31 -- # read -r var val _ 00:14:58.035 14:34:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:58.035 14:34:06 -- setup/common.sh@32 -- # continue 00:14:58.035 14:34:06 -- setup/common.sh@31 -- # IFS=': ' 00:14:58.035 14:34:06 -- 
setup/common.sh@31 -- # read -r var val _ 00:14:58.035 14:34:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:58.035 14:34:06 -- setup/common.sh@32 -- # continue 00:14:58.035 14:34:06 -- setup/common.sh@31 -- # IFS=': ' 00:14:58.035 14:34:06 -- setup/common.sh@31 -- # read -r var val _ 00:14:58.035 14:34:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:58.035 14:34:06 -- setup/common.sh@32 -- # continue 00:14:58.035 14:34:06 -- setup/common.sh@31 -- # IFS=': ' 00:14:58.035 14:34:06 -- setup/common.sh@31 -- # read -r var val _ 00:14:58.035 14:34:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:58.035 14:34:06 -- setup/common.sh@32 -- # continue 00:14:58.035 14:34:06 -- setup/common.sh@31 -- # IFS=': ' 00:14:58.035 14:34:06 -- setup/common.sh@31 -- # read -r var val _ 00:14:58.035 14:34:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:58.035 14:34:06 -- setup/common.sh@32 -- # continue 00:14:58.035 14:34:06 -- setup/common.sh@31 -- # IFS=': ' 00:14:58.035 14:34:06 -- setup/common.sh@31 -- # read -r var val _ 00:14:58.035 14:34:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:58.035 14:34:06 -- setup/common.sh@32 -- # continue 00:14:58.035 14:34:06 -- setup/common.sh@31 -- # IFS=': ' 00:14:58.035 14:34:06 -- setup/common.sh@31 -- # read -r var val _ 00:14:58.035 14:34:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:58.035 14:34:06 -- setup/common.sh@32 -- # continue 00:14:58.035 14:34:06 -- setup/common.sh@31 -- # IFS=': ' 00:14:58.035 14:34:06 -- setup/common.sh@31 -- # read -r var val _ 00:14:58.035 14:34:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:58.035 14:34:06 -- setup/common.sh@32 -- # continue 00:14:58.035 14:34:06 -- setup/common.sh@31 -- # IFS=': ' 00:14:58.035 14:34:06 -- setup/common.sh@31 -- # read -r var val _ 00:14:58.035 14:34:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:58.035 14:34:06 -- setup/common.sh@32 -- # continue 00:14:58.035 14:34:06 -- setup/common.sh@31 -- # IFS=': ' 00:14:58.035 14:34:06 -- setup/common.sh@31 -- # read -r var val _ 00:14:58.035 14:34:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:58.035 14:34:06 -- setup/common.sh@33 -- # echo 0 00:14:58.035 14:34:06 -- setup/common.sh@33 -- # return 0 00:14:58.035 14:34:06 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:14:58.035 14:34:06 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:14:58.035 14:34:06 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:14:58.035 14:34:06 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:14:58.035 14:34:06 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:14:58.035 node0=512 expecting 512 00:14:58.035 14:34:06 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:14:58.035 00:14:58.035 real 0m0.685s 00:14:58.035 user 0m0.266s 00:14:58.035 sys 0m0.377s 00:14:58.035 14:34:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:58.035 14:34:06 -- common/autotest_common.sh@10 -- # set +x 00:14:58.035 ************************************ 00:14:58.035 END TEST custom_alloc 00:14:58.035 ************************************ 00:14:58.035 14:34:06 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:14:58.035 14:34:06 -- 
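The xtrace above is dominated by one small helper: get_meminfo in setup/common.sh walks /proc/meminfo (or a node's own meminfo file, as in the HugePages_Surp 0 call) with IFS=': ' and echoes the value of the first matching key. A minimal sketch of that lookup pattern, simplified from what the trace shows (the function name and exact structure here are illustrative, not the verbatim SPDK source):

    get_meminfo_sketch() {
        shopt -s extglob
        local get=$1 node=${2:-} line var val _ mem
        local mem_f=/proc/meminfo
        # A node argument switches to the per-node file, as the trace above does.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        # Per-node meminfo lines carry a "Node <n> " prefix that /proc/meminfo
        # lines lack; strip it so the same key match works for both files.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            # Non-matching keys fall through: that is every "continue" above.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

Against the node0 dump just above, get_meminfo_sketch HugePages_Total 0 would print 512, which is exactly the echo 512 the trace records.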
00:14:58.035 14:34:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:14:58.035 14:34:06 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:58.035 14:34:06 -- common/autotest_common.sh@10 -- # set +x
00:14:58.035 ************************************
00:14:58.035 START TEST no_shrink_alloc
00:14:58.035 ************************************
00:14:58.035 14:34:06 -- common/autotest_common.sh@1111 -- # no_shrink_alloc
00:14:58.035 14:34:06 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:14:58.035 14:34:06 -- setup/hugepages.sh@49 -- # local size=2097152
00:14:58.035 14:34:06 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:14:58.035 14:34:06 -- setup/hugepages.sh@51 -- # shift
00:14:58.035 14:34:06 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:14:58.035 14:34:06 -- setup/hugepages.sh@52 -- # local node_ids
00:14:58.035 14:34:06 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:14:58.035 14:34:06 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:14:58.035 14:34:06 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:14:58.035 14:34:06 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:14:58.035 14:34:06 -- setup/hugepages.sh@62 -- # local user_nodes
00:14:58.035 14:34:06 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:14:58.035 14:34:06 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:14:58.035 14:34:06 -- setup/hugepages.sh@67 -- # nodes_test=()
00:14:58.035 14:34:06 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:14:58.035 14:34:06 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:14:58.035 14:34:06 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:14:58.035 14:34:06 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:14:58.035 14:34:06 -- setup/hugepages.sh@73 -- # return 0
00:14:58.035 14:34:06 -- setup/hugepages.sh@198 -- # setup output
00:14:58.035 14:34:06 -- setup/common.sh@9 -- # [[ output == output ]]
00:14:58.035 14:34:06 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:14:58.603 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:14:58.603 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:58.603 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:58.603 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:58.603 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:58.603 14:34:07 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:14:58.603 14:34:07 -- setup/hugepages.sh@89 -- # local node
00:14:58.603 14:34:07 -- setup/hugepages.sh@90 -- # local sorted_t
00:14:58.603 14:34:07 -- setup/hugepages.sh@91 -- # local sorted_s
00:14:58.603 14:34:07 -- setup/hugepages.sh@92 -- # local surp
00:14:58.603 14:34:07 -- setup/hugepages.sh@93 -- # local resv
00:14:58.603 14:34:07 -- setup/hugepages.sh@94 -- # local anon
00:14:58.603 14:34:07 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:14:58.603 14:34:07 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:14:58.603 14:34:07 -- setup/common.sh@17 -- # local get=AnonHugePages
00:14:58.603 14:34:07 -- setup/common.sh@18 -- # local node=
00:14:58.603 14:34:07 -- setup/common.sh@19 -- # local var val
00:14:58.603 14:34:07 -- setup/common.sh@20 -- # local mem_f mem
00:14:58.603 14:34:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:58.603 14:34:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
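get_test_nr_hugepages above turns the requested size=2097152 into nr_hugepages=1024 and pins the whole pool to user node 0. A sketch of that arithmetic, assuming (from the traced numbers and the Hugepagesize: 2048 kB in the dumps below) that both quantities are in kB; the variable names mirror the trace but the code is illustrative:

    default_hugepages=2048              # Hugepagesize from /proc/meminfo, in kB (assumed unit)
    size=2097152                        # requested pool, in kB (2 GiB, assumed unit)
    if (( size >= default_hugepages )); then
        nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024
    fi
    echo "nr_hugepages=$nr_hugepages"
    nodes_test=()
    nodes_test[0]=$nr_hugepages         # pin the whole pool to node 0

This matches the traced assignments nr_hugepages=1024 and nodes_test[_no_nodes]=1024.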
00:14:58.603 14:34:07 -- setup/common.sh@25 -- # [[ -n '' ]]
00:14:58.603 14:34:07 -- setup/common.sh@28 -- # mapfile -t mem
00:14:58.603 14:34:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:58.603 14:34:07 -- setup/common.sh@31 -- # IFS=': '
00:14:58.603 14:34:07 -- setup/common.sh@31 -- # read -r var val _
00:14:58.603 14:34:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232248 kB' 'MemFree: 7960464 kB' 'MemAvailable: 9540132 kB' 'Buffers: 2436 kB' 'Cached: 1786964 kB' 'SwapCached: 0 kB' 'Active: 454644 kB' 'Inactive: 1454488 kB' 'Active(anon): 119532 kB' 'Inactive(anon): 10672 kB' 'Active(file): 335112 kB' 'Inactive(file): 1443816 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1100 kB' 'Writeback: 0 kB' 'AnonPages: 120028 kB' 'Mapped: 51212 kB' 'Shmem: 10472 kB' 'KReclaimable: 75064 kB' 'Slab: 147872 kB' 'SReclaimable: 75064 kB' 'SUnreclaim: 72808 kB' 'KernelStack: 4584 kB' 'PageTables: 3024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456152 kB' 'Committed_AS: 330500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53216 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 7163904 kB' 'DirectMap1G: 7340032 kB'
00:14:58.603 14:34:07 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:14:58.603 14:34:07 -- setup/common.sh@32 -- # continue
00:14:58.603 14:34:07 -- setup/common.sh@31 -- # IFS=': '
00:14:58.603 14:34:07 -- setup/common.sh@31 -- # read -r var val _
[... the skip sequence repeats for every field from MemFree through HardwareCorrupted ...]
00:14:58.604 14:34:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:14:58.604 14:34:07 -- setup/common.sh@33 -- # echo 0
00:14:58.604 14:34:07 -- setup/common.sh@33 -- # return 0
00:14:58.604 14:34:07 -- setup/hugepages.sh@97 -- # anon=0
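The hugepages.sh@96 test a few records back matches the string "always [madvise] never" against the pattern *\[\n\e\v\e\r\]*: that string is the standard format of /sys/kernel/mm/transparent_hugepage/enabled, and only when THP is not locked to "never" does verify_nr_hugepages bother sampling AnonHugePages (here yielding anon=0). A sketch of that guard, using the illustrative helper sketched earlier (the sysfs path is the standard kernel one; the structure is assumed, not the script's literal code):

    thp_mode=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    anon=0
    if [[ $thp_mode != *"[never]"* ]]; then
        # THP can hand out anonymous hugepages behind the test's back,
        # so they have to be accounted for separately.
        anon=$(get_meminfo_sketch AnonHugePages)
    fi
    echo "anon_hugepages=$anon"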
00:14:58.604 14:34:07 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:14:58.604 14:34:07 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:14:58.604 14:34:07 -- setup/common.sh@18 -- # local node=
00:14:58.604 14:34:07 -- setup/common.sh@19 -- # local var val
00:14:58.604 14:34:07 -- setup/common.sh@20 -- # local mem_f mem
00:14:58.604 14:34:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:58.604 14:34:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:14:58.604 14:34:07 -- setup/common.sh@25 -- # [[ -n '' ]]
00:14:58.604 14:34:07 -- setup/common.sh@28 -- # mapfile -t mem
00:14:58.604 14:34:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:58.604 14:34:07 -- setup/common.sh@31 -- # IFS=': '
00:14:58.604 14:34:07 -- setup/common.sh@31 -- # read -r var val _
00:14:58.604 14:34:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232248 kB' 'MemFree: 7960464 kB' 'MemAvailable: 9540132 kB' 'Buffers: 2436 kB' 'Cached: 1786964 kB' 'SwapCached: 0 kB' 'Active: 454600 kB' 'Inactive: 1454480 kB' 'Active(anon): 119488 kB' 'Inactive(anon): 10664 kB' 'Active(file): 335112 kB' 'Inactive(file): 1443816 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1100 kB' 'Writeback: 0 kB' 'AnonPages: 119716 kB' 'Mapped: 50968 kB' 'Shmem: 10472 kB' 'KReclaimable: 75064 kB' 'Slab: 147860 kB' 'SReclaimable: 75064 kB' 'SUnreclaim: 72796 kB' 'KernelStack: 4544 kB' 'PageTables: 2996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456152 kB' 'Committed_AS: 330500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53200 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 7163904 kB' 'DirectMap1G: 7340032 kB'
00:14:58.604 14:34:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:14:58.604 14:34:07 -- setup/common.sh@32 -- # continue
00:14:58.604 14:34:07 -- setup/common.sh@31 -- # IFS=': '
00:14:58.604 14:34:07 -- setup/common.sh@31 -- # read -r var val _
[... the skip sequence repeats for every field from MemFree through HugePages_Rsvd; timestamps advance from 00:14:58.604 to 00:14:58.866 along the way ...]
00:14:58.866 14:34:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:14:58.866 14:34:07 -- setup/common.sh@33 -- # echo 0
00:14:58.866 14:34:07 -- setup/common.sh@33 -- # return 0
00:14:58.866 14:34:07 -- setup/hugepages.sh@99 -- # surp=0
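With anon and surp now sampled, the trace below goes on to fetch HugePages_Rsvd and HugePages_Total. The invariant verify_nr_hugepages is building up is that the kernel's total pool equals what the test requested plus any surplus (overcommitted) and reserved (mapped but not yet faulted) pages. A compact restatement of that check, reusing the illustrative helper sketched earlier (a sketch, not the script's literal code):

    nr_hugepages=1024                             # requested via get_test_nr_hugepages
    surp=$(get_meminfo_sketch HugePages_Surp)     # pages allocated beyond the static pool
    resv=$(get_meminfo_sketch HugePages_Rsvd)     # pages reserved for mappings, not yet faulted
    total=$(get_meminfo_sketch HugePages_Total)
    (( total == nr_hugepages + surp + resv )) \
        || echo "mismatch: total=$total expected=$((nr_hugepages + surp + resv))"

In this run surp, resv and anon are all 0, so the check reduces to HugePages_Total == 1024, which the dumps below confirm.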
00:14:58.866 14:34:07 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:14:58.866 14:34:07 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:14:58.866 14:34:07 -- setup/common.sh@18 -- # local node=
00:14:58.866 14:34:07 -- setup/common.sh@19 -- # local var val
00:14:58.866 14:34:07 -- setup/common.sh@20 -- # local mem_f mem
00:14:58.866 14:34:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:58.866 14:34:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:14:58.866 14:34:07 -- setup/common.sh@25 -- # [[ -n '' ]]
00:14:58.866 14:34:07 -- setup/common.sh@28 -- # mapfile -t mem
00:14:58.866 14:34:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:58.866 14:34:07 -- setup/common.sh@31 -- # IFS=': '
00:14:58.866 14:34:07 -- setup/common.sh@31 -- # read -r var val _
00:14:58.866 14:34:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232248 kB' 'MemFree: 7960984 kB' 'MemAvailable: 9540652 kB' 'Buffers: 2436 kB' 'Cached: 1786964 kB' 'SwapCached: 0 kB' 'Active: 454376 kB' 'Inactive: 1454480 kB' 'Active(anon): 119264 kB' 'Inactive(anon): 10664 kB' 'Active(file): 335112 kB' 'Inactive(file): 1443816 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1100 kB' 'Writeback: 0 kB' 'AnonPages: 119776 kB' 'Mapped: 50968 kB' 'Shmem: 10472 kB' 'KReclaimable: 75064 kB' 'Slab: 147860 kB' 'SReclaimable: 75064 kB' 'SUnreclaim: 72796 kB' 'KernelStack: 4560 kB' 'PageTables: 3032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456152 kB' 'Committed_AS: 330500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53200 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 7163904 kB' 'DirectMap1G: 7340032 kB'
00:14:58.866 14:34:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:14:58.866 14:34:07 -- setup/common.sh@32 -- # continue
00:14:58.866 14:34:07 -- setup/common.sh@31 -- # IFS=': '
00:14:58.866 14:34:07 -- setup/common.sh@31 -- # read -r var val _
[... the skip sequence repeats for every field from MemFree through HugePages_Free; timestamps advance from 00:14:58.866 to 00:14:58.867 near the end ...]
00:14:58.867 14:34:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
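A note on reading these traces: the backslash runs such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d are not in the script source. Inside [[ ]], a quoted right-hand side is compared literally rather than as a glob, and bash's xtrace output marks that by escaping every character of the pattern word. A minimal reproduction (illustrative values):

    set -x
    get=HugePages_Rsvd var=HugePages_Rsvd
    [[ $var == "$get" ]] && echo match
    # the trace line printed is: [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
    set +x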
00:14:58.867 14:34:07 -- setup/common.sh@33 -- # echo 0 00:14:58.867 14:34:07 -- setup/common.sh@33 -- # return 0 00:14:58.867 14:34:07 -- setup/hugepages.sh@100 -- # resv=0 00:14:58.867 14:34:07 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:14:58.867 nr_hugepages=1024 00:14:58.867 resv_hugepages=0 00:14:58.867 14:34:07 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:14:58.867 surplus_hugepages=0 00:14:58.867 14:34:07 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:14:58.867 anon_hugepages=0 00:14:58.867 14:34:07 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:14:58.867 14:34:07 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:14:58.867 14:34:07 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:14:58.867 14:34:07 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:14:58.867 14:34:07 -- setup/common.sh@17 -- # local get=HugePages_Total 00:14:58.867 14:34:07 -- setup/common.sh@18 -- # local node= 00:14:58.867 14:34:07 -- setup/common.sh@19 -- # local var val 00:14:58.868 14:34:07 -- setup/common.sh@20 -- # local mem_f mem 00:14:58.868 14:34:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:58.868 14:34:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:58.868 14:34:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:58.868 14:34:07 -- setup/common.sh@28 -- # mapfile -t mem 00:14:58.868 14:34:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:58.868 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:58.868 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:58.868 14:34:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232248 kB' 'MemFree: 7961220 kB' 'MemAvailable: 9540888 kB' 'Buffers: 2436 kB' 'Cached: 1786964 kB' 'SwapCached: 0 kB' 'Active: 454576 kB' 'Inactive: 1454480 kB' 'Active(anon): 119464 kB' 'Inactive(anon): 10664 kB' 'Active(file): 335112 kB' 'Inactive(file): 1443816 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1100 kB' 'Writeback: 0 kB' 'AnonPages: 119696 kB' 'Mapped: 50968 kB' 'Shmem: 10472 kB' 'KReclaimable: 75064 kB' 'Slab: 147860 kB' 'SReclaimable: 75064 kB' 'SUnreclaim: 72796 kB' 'KernelStack: 4544 kB' 'PageTables: 2996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456152 kB' 'Committed_AS: 330500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53184 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 7163904 kB' 'DirectMap1G: 7340032 kB' 00:14:58.868 14:34:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:58.868 14:34:07 -- setup/common.sh@32 -- # continue 00:14:58.868 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:58.868 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:58.868 14:34:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:58.868 14:34:07 -- setup/common.sh@32 -- # continue 00:14:58.868 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:58.868 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:58.868 14:34:07 -- setup/common.sh@32 -- # [[ 
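Each get_meminfo call in this log follows the same shape: the source file is captured with mapfile, and an IFS=': ' / read -r var val _ cycle walks it one record at a time, continuing past every key that fails the pattern test and echoing the value of the first key that matches. A minimal self-contained sketch of that pattern, reconstructed from the trace records rather than copied from the SPDK source (function name and layout follow the trace):

    #!/usr/bin/env bash
    # Sketch: look up a single key in /proc/meminfo the way the trace does.
    get_meminfo() {
        local get=$1 var val
        local -a mem
        mapfile -t mem < /proc/meminfo           # capture the file, as "mapfile -t mem" above
        while IFS=': ' read -r var val _; do     # split "Key:   value kB" into key and value
            [[ $var == "$get" ]] || continue     # wrong key: next record
            echo "$val"                          # e.g. 1024 for HugePages_Total
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1                                 # key not present
    }
    get_meminfo HugePages_Rsvd                   # prints 0 in this run

The (( 1024 == nr_hugepages + surp + resv )) checks that follow then assert that the page count the test requested equals what the kernel actually reports once surplus and reserved pages are accounted for.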
[xtrace condensed: the scan walks the dump above from MemTotal through Unaccepted; nothing matches HugePages_Total]
00:14:58.869 14:34:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:14:58.869 14:34:07 -- setup/common.sh@33 -- # echo 1024
00:14:58.869 14:34:07 -- setup/common.sh@33 -- # return 0
00:14:58.869 14:34:07 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:14:58.869 14:34:07 -- setup/hugepages.sh@112 -- # get_nodes
00:14:58.869 14:34:07 -- setup/hugepages.sh@27 -- # local node
00:14:58.869 14:34:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:14:58.869 14:34:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:14:58.869 14:34:07 -- setup/hugepages.sh@32 -- # no_nodes=1
00:14:58.869 14:34:07 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
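get_nodes, traced just above, discovers the NUMA nodes with an extglob over /sys/devices/system/node and records each node's hugepage total in the nodes_sys array; this VM exposes a single node0. A rough stand-alone equivalent (the awk lookup is an illustrative substitute for the script's own get_meminfo call):

    #!/usr/bin/env bash
    shopt -s extglob nullglob                    # +([0-9]) needs extglob; nullglob if no nodes
    declare -A nodes_sys
    # Enumerate nodeN directories and record each node's HugePages_Total.
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || exit 1                 # same guard as "(( no_nodes > 0 ))" above
    echo "nodes found: ${!nodes_sys[*]}"         # "nodes found: 0" on this VM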
00:14:58.869 14:34:07 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:14:58.869 14:34:07 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:14:58.869 14:34:07 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:14:58.869 14:34:07 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:14:58.869 14:34:07 -- setup/common.sh@18 -- # local node=0
00:14:58.869 14:34:07 -- setup/common.sh@19 -- # local var val
00:14:58.869 14:34:07 -- setup/common.sh@20 -- # local mem_f mem
00:14:58.869 14:34:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:58.869 14:34:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:14:58.869 14:34:07 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:14:58.869 14:34:07 -- setup/common.sh@28 -- # mapfile -t mem
00:14:58.869 14:34:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:58.869 14:34:07 -- setup/common.sh@31 -- # IFS=': '
00:14:58.869 14:34:07 -- setup/common.sh@31 -- # read -r var val _
00:14:58.869 14:34:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232248 kB' 'MemFree: 7961220 kB' 'MemUsed: 4271028 kB' 'SwapCached: 0 kB' 'Active: 454556 kB' 'Inactive: 1454480 kB' 'Active(anon): 119444 kB' 'Inactive(anon): 10664 kB' 'Active(file): 335112 kB' 'Inactive(file): 1443816 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 1100 kB' 'Writeback: 0 kB' 'FilePages: 1789400 kB' 'Mapped: 50968 kB' 'AnonPages: 119676 kB' 'Shmem: 10472 kB' 'KernelStack: 4528 kB' 'PageTables: 2960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 75064 kB' 'Slab: 147860 kB' 'SReclaimable: 75064 kB' 'SUnreclaim: 72796 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: the scan walks the node0 dump from MemTotal through HugePages_Free; nothing matches HugePages_Surp]
00:14:58.870 14:34:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:14:58.870 14:34:07 -- setup/common.sh@33 -- # echo 0
00:14:58.870 14:34:07 -- setup/common.sh@33 -- # return 0
00:14:58.870 14:34:07 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:14:58.870 14:34:07 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:14:58.870 14:34:07 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:14:58.870 14:34:07 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:14:58.870 node0=1024 expecting 1024 14:34:07 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:14:58.870 14:34:07 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:14:58.870 14:34:07 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:14:58.870 14:34:07 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:14:58.870 14:34:07 -- setup/hugepages.sh@202 -- # setup output
00:14:58.870 14:34:07 -- setup/common.sh@9 -- # [[ output == output ]]
00:14:58.870 14:34:07 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:14:59.128 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:14:59.390 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:59.390 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:59.390 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:59.390 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:59.390 INFO: Requested 512 hugepages but 1024 already allocated on node0
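The HugePages_Surp lookup above is the node-qualified variant of get_meminfo: called with node 0, it swaps mem_f from /proc/meminfo to the per-node file and strips the leading "Node <n> " from every record, so the same key/value parser applies unchanged. A sketch of just that path, reconstructed from the @23, @24 and @29 records:

    #!/usr/bin/env bash
    shopt -s extglob                             # the +([0-9]) pattern below needs extglob
    node=0
    mem_f=/proc/meminfo
    # Prefer the per-node meminfo when it exists, as setup/common.sh@23-24 do.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")             # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
    printf '%s\n' "${mem[@]:0:3}"                # show the first few normalized records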
00:14:59.390 14:34:07 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:14:59.390 14:34:07 -- setup/hugepages.sh@89 -- # local node
00:14:59.390 14:34:07 -- setup/hugepages.sh@90 -- # local sorted_t
00:14:59.390 14:34:07 -- setup/hugepages.sh@91 -- # local sorted_s
00:14:59.390 14:34:07 -- setup/hugepages.sh@92 -- # local surp
00:14:59.390 14:34:07 -- setup/hugepages.sh@93 -- # local resv
00:14:59.390 14:34:07 -- setup/hugepages.sh@94 -- # local anon
00:14:59.390 14:34:07 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:14:59.390 14:34:07 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:14:59.390 14:34:07 -- setup/common.sh@17 -- # local get=AnonHugePages
00:14:59.390 14:34:07 -- setup/common.sh@18 -- # local node=
00:14:59.390 14:34:07 -- setup/common.sh@19 -- # local var val
00:14:59.390 14:34:07 -- setup/common.sh@20 -- # local mem_f mem
00:14:59.390 14:34:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:14:59.390 14:34:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:14:59.390 14:34:07 -- setup/common.sh@25 -- # [[ -n '' ]]
00:14:59.390 14:34:07 -- setup/common.sh@28 -- # mapfile -t mem
00:14:59.390 14:34:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:14:59.390 14:34:07 -- setup/common.sh@31 -- # IFS=': '
00:14:59.390 14:34:07 -- setup/common.sh@31 -- # read -r var val _
00:14:59.390 14:34:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232248 kB' 'MemFree: 7960660 kB' 'MemAvailable: 9540328 kB' 'Buffers: 2436 kB' 'Cached: 1786964 kB' 'SwapCached: 0 kB' 'Active: 455388 kB' 'Inactive: 1454488 kB' 'Active(anon): 120276 kB' 'Inactive(anon): 10672 kB' 'Active(file): 335112 kB' 'Inactive(file): 1443816 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1100 kB' 'Writeback: 0 kB' 'AnonPages: 120552 kB' 'Mapped: 51580 kB' 'Shmem: 10472 kB' 'KReclaimable: 75064 kB' 'Slab: 147832 kB' 'SReclaimable: 75064 kB' 'SUnreclaim: 72768 kB' 'KernelStack: 4684 kB' 'PageTables: 3120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456152 kB' 'Committed_AS: 330500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53248 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 7163904 kB' 'DirectMap1G: 7340032 kB'
[xtrace condensed: the scan walks the dump from MemTotal through HardwareCorrupted; nothing matches AnonHugePages]
00:14:59.391 14:34:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:14:59.391 14:34:07 -- setup/common.sh@33 -- # echo 0
00:14:59.391 14:34:07 -- setup/common.sh@33 -- # return 0
00:14:59.391 14:34:07 -- setup/hugepages.sh@97 -- # anon=0
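verify_nr_hugepages samples AnonHugePages only after confirming that transparent hugepages are not globally disabled; that is what the odd-looking [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] record is: the contents of the kernel's THP "enabled" knob tested against a literal [never]. A sketch of the same guard (the awk line stands in for the get_meminfo call; the value is in kB, as in /proc/meminfo):

    #!/usr/bin/env bash
    # Skip the AnonHugePages sample when THP is globally off ("[never]" selected).
    anon=0
    thp=/sys/kernel/mm/transparent_hugepage/enabled
    if [[ -r $thp && $(<"$thp") != *'[never]'* ]]; then
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # kB
    fi
    echo "anon_hugepages=$anon"                  # anon_hugepages=0 in this run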
kB' 'Inactive(file): 1443816 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1100 kB' 'Writeback: 0 kB' 'AnonPages: 119788 kB' 'Mapped: 51296 kB' 'Shmem: 10472 kB' 'KReclaimable: 75064 kB' 'Slab: 147824 kB' 'SReclaimable: 75064 kB' 'SUnreclaim: 72760 kB' 'KernelStack: 4576 kB' 'PageTables: 2972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456152 kB' 'Committed_AS: 330500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53200 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 7163904 kB' 'DirectMap1G: 7340032 kB' 00:14:59.391 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.391 14:34:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.391 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.391 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.391 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.391 14:34:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.391 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.391 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.391 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.391 14:34:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.391 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.391 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.391 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.391 14:34:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.391 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.391 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.391 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.391 14:34:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.391 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.391 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.391 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.391 14:34:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.391 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.391 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.391 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.391 14:34:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.391 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.391 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.391 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.391 14:34:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.391 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.391 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.391 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.391 14:34:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.391 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.391 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.391 14:34:07 -- 
setup/common.sh@31 -- # read -r var val _ 00:14:59.391 14:34:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.391 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.391 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.391 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.391 14:34:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.391 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.391 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.391 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.391 14:34:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.391 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.391 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.391 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.391 14:34:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.391 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.391 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.391 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.391 14:34:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.391 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.391 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.391 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.391 14:34:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.391 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.391 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # 
continue 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # 
IFS=': ' 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.392 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.392 14:34:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.392 14:34:07 -- setup/common.sh@33 -- # echo 0 00:14:59.392 14:34:07 -- setup/common.sh@33 -- # return 0 00:14:59.392 14:34:07 -- setup/hugepages.sh@99 -- # surp=0 00:14:59.392 14:34:07 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:14:59.392 14:34:07 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:14:59.393 14:34:07 -- setup/common.sh@18 -- # local node= 00:14:59.393 14:34:07 -- setup/common.sh@19 -- # local var val 00:14:59.393 14:34:07 -- setup/common.sh@20 -- # local mem_f mem 00:14:59.393 14:34:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:59.393 14:34:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:59.393 14:34:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:59.393 14:34:07 -- setup/common.sh@28 -- # mapfile -t mem 00:14:59.393 14:34:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:59.393 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.393 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.393 14:34:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232248 kB' 'MemFree: 7960156 kB' 'MemAvailable: 9539824 kB' 'Buffers: 2436 kB' 'Cached: 1786964 kB' 'SwapCached: 0 kB' 'Active: 454532 kB' 'Inactive: 1454480 kB' 'Active(anon): 119420 kB' 'Inactive(anon): 10664 kB' 'Active(file): 335112 kB' 'Inactive(file): 1443816 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1100 kB' 'Writeback: 0 kB' 'AnonPages: 119652 kB' 'Mapped: 51028 kB' 'Shmem: 10472 kB' 'KReclaimable: 75064 kB' 'Slab: 147824 kB' 'SReclaimable: 75064 kB' 'SUnreclaim: 72760 kB' 'KernelStack: 4528 kB' 'PageTables: 2868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456152 kB' 'Committed_AS: 330500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53184 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 7163904 kB' 'DirectMap1G: 7340032 kB' 00:14:59.393 14:34:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:59.393 14:34:07 -- setup/common.sh@32 -- # continue [... the same scan repeats for every field from MemFree through HugePages_Free, with no match ...] 00:14:59.394 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.394 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.394 14:34:07 -- setup/common.sh@32 --
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:59.394 14:34:07 -- setup/common.sh@33 -- # echo 0 00:14:59.394 14:34:07 -- setup/common.sh@33 -- # return 0 00:14:59.394 14:34:07 -- setup/hugepages.sh@100 -- # resv=0 00:14:59.394 14:34:07 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:14:59.394 nr_hugepages=1024 00:14:59.394 resv_hugepages=0 00:14:59.394 14:34:07 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:14:59.394 surplus_hugepages=0 00:14:59.394 14:34:07 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:14:59.394 anon_hugepages=0 00:14:59.394 14:34:07 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:14:59.394 14:34:07 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:14:59.394 14:34:07 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:14:59.394 14:34:07 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:14:59.394 14:34:07 -- setup/common.sh@17 -- # local get=HugePages_Total 00:14:59.394 14:34:07 -- setup/common.sh@18 -- # local node= 00:14:59.394 14:34:07 -- setup/common.sh@19 -- # local var val 00:14:59.394 14:34:07 -- setup/common.sh@20 -- # local mem_f mem 00:14:59.394 14:34:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:59.394 14:34:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:59.394 14:34:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:59.394 14:34:07 -- setup/common.sh@28 -- # mapfile -t mem 00:14:59.394 14:34:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:59.394 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.394 14:34:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232248 kB' 'MemFree: 7960156 kB' 'MemAvailable: 9539824 kB' 'Buffers: 2436 kB' 'Cached: 1786964 kB' 'SwapCached: 0 kB' 'Active: 454532 kB' 'Inactive: 1454480 kB' 'Active(anon): 119420 kB' 'Inactive(anon): 10664 kB' 'Active(file): 335112 kB' 'Inactive(file): 1443816 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1100 kB' 'Writeback: 0 kB' 'AnonPages: 119912 kB' 'Mapped: 51028 kB' 'Shmem: 10472 kB' 'KReclaimable: 75064 kB' 'Slab: 147824 kB' 'SReclaimable: 75064 kB' 'SUnreclaim: 72760 kB' 'KernelStack: 4528 kB' 'PageTables: 2868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456152 kB' 'Committed_AS: 330500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53200 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 7163904 kB' 'DirectMap1G: 7340032 kB' 00:14:59.394 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.394 14:34:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:59.394 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.394 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.394 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.394 14:34:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:59.394 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.394 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.394 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 
00:14:59.394 14:34:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:59.394 14:34:07 -- setup/common.sh@32 -- # continue [... the same scan repeats for every field from Buffers through HardwareCorrupted, with no match ...] 00:14:59.396 14:34:07 --
setup/common.sh@31 -- # IFS=': ' 00:14:59.396 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.396 14:34:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:59.396 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.396 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.396 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.396 14:34:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:59.396 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.396 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.396 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.396 14:34:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:59.396 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.396 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.396 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.396 14:34:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:59.396 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.396 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.396 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.396 14:34:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:59.396 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.396 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.396 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.396 14:34:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:59.396 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.654 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.654 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.654 14:34:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:59.654 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.654 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.654 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.654 14:34:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:59.654 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.654 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.654 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.654 14:34:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:59.654 14:34:07 -- setup/common.sh@33 -- # echo 1024 00:14:59.654 14:34:07 -- setup/common.sh@33 -- # return 0 00:14:59.654 14:34:07 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:14:59.654 14:34:07 -- setup/hugepages.sh@112 -- # get_nodes 00:14:59.654 14:34:07 -- setup/hugepages.sh@27 -- # local node 00:14:59.654 14:34:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:59.654 14:34:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:14:59.654 14:34:07 -- setup/hugepages.sh@32 -- # no_nodes=1 00:14:59.654 14:34:07 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:14:59.654 14:34:07 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:14:59.654 14:34:07 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:14:59.654 14:34:07 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:14:59.654 14:34:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:59.654 14:34:07 -- setup/common.sh@18 -- # local node=0 00:14:59.654 14:34:07 -- 
setup/common.sh@19 -- # local var val 00:14:59.654 14:34:07 -- setup/common.sh@20 -- # local mem_f mem 00:14:59.654 14:34:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:59.654 14:34:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:14:59.654 14:34:07 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:14:59.654 14:34:07 -- setup/common.sh@28 -- # mapfile -t mem 00:14:59.654 14:34:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:59.654 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.655 14:34:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232248 kB' 'MemFree: 7960156 kB' 'MemUsed: 4272092 kB' 'SwapCached: 0 kB' 'Active: 454344 kB' 'Inactive: 1454480 kB' 'Active(anon): 119232 kB' 'Inactive(anon): 10664 kB' 'Active(file): 335112 kB' 'Inactive(file): 1443816 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 1100 kB' 'Writeback: 0 kB' 'FilePages: 1789400 kB' 'Mapped: 50968 kB' 'AnonPages: 119720 kB' 'Shmem: 10472 kB' 'KernelStack: 4512 kB' 'PageTables: 2924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 75064 kB' 'Slab: 147824 kB' 'SReclaimable: 75064 kB' 'SUnreclaim: 72760 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:14:59.655 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.655 14:34:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.655 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.655 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.655 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.655 14:34:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.655 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.655 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.655 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.655 14:34:07 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.655 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.655 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.655 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.655 14:34:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.655 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.655 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.655 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.655 14:34:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.655 14:34:07 -- setup/common.sh@32 -- # continue 00:14:59.655 14:34:07 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.655 14:34:07 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.655 14:34:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.655 14:34:08 -- setup/common.sh@32 -- # continue 00:14:59.655 14:34:08 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.655 14:34:08 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.655 14:34:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.655 14:34:08 -- setup/common.sh@32 -- # continue 00:14:59.655 14:34:08 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.655 14:34:08 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.655 14:34:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.655 14:34:08 -- setup/common.sh@32 -- # continue [... the same scan repeats for the node0 fields Active(file) through FileHugePages, with no match ...] 00:14:59.655 14:34:08 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.655 14:34:08 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.655 14:34:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.655 14:34:08 -- setup/common.sh@32 --
# continue 00:14:59.655 14:34:08 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.655 14:34:08 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.655 14:34:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.655 14:34:08 -- setup/common.sh@32 -- # continue 00:14:59.655 14:34:08 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.655 14:34:08 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.655 14:34:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.655 14:34:08 -- setup/common.sh@32 -- # continue 00:14:59.655 14:34:08 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.655 14:34:08 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.655 14:34:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.655 14:34:08 -- setup/common.sh@32 -- # continue 00:14:59.655 14:34:08 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.655 14:34:08 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.655 14:34:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:59.655 14:34:08 -- setup/common.sh@33 -- # echo 0 00:14:59.655 14:34:08 -- setup/common.sh@33 -- # return 0 00:14:59.655 14:34:08 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:14:59.655 14:34:08 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:14:59.655 14:34:08 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:14:59.655 14:34:08 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:14:59.655 node0=1024 expecting 1024 00:14:59.655 14:34:08 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:14:59.655 14:34:08 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:14:59.655 00:14:59.655 real 0m1.461s 00:14:59.655 user 0m0.626s 00:14:59.655 sys 0m0.878s 00:14:59.655 14:34:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:59.655 14:34:08 -- common/autotest_common.sh@10 -- # set +x 00:14:59.655 ************************************ 00:14:59.655 END TEST no_shrink_alloc 00:14:59.655 ************************************ 00:14:59.655 14:34:08 -- setup/hugepages.sh@217 -- # clear_hp 00:14:59.655 14:34:08 -- setup/hugepages.sh@37 -- # local node hp 00:14:59.656 14:34:08 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:14:59.656 14:34:08 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:14:59.656 14:34:08 -- setup/hugepages.sh@41 -- # echo 0 00:14:59.656 14:34:08 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:14:59.656 14:34:08 -- setup/hugepages.sh@41 -- # echo 0 00:14:59.656 14:34:08 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:14:59.656 14:34:08 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:14:59.656 00:14:59.656 real 0m6.972s 00:14:59.656 user 0m2.787s 00:14:59.656 sys 0m3.809s 00:14:59.656 14:34:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:59.656 ************************************ 00:14:59.656 END TEST hugepages 00:14:59.656 ************************************ 00:14:59.656 14:34:08 -- common/autotest_common.sh@10 -- # set +x 00:14:59.656 14:34:08 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:14:59.656 14:34:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:59.656 14:34:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:59.656 14:34:08 -- common/autotest_common.sh@10 -- # set +x 00:14:59.656 
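The field-by-field traces above all come from a single helper: get_meminfo walks /proc/meminfo (or a per-node copy under /sys/devices/system/node) one line at a time until the requested field name matches, then echoes its value. A minimal bash sketch of that pattern, reconstructed from the trace rather than copied from setup/common.sh (the real helper also strips the 'Node N ' prefix from the per-node files, which is what the mapfile lines in the trace do):

  get_meminfo() {
      # scan a meminfo-style file for one field and print its value
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done < /proc/meminfo
      return 1
  }

  surp=$(get_meminfo HugePages_Surp)     # 0 in this run
  resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
  total=$(get_meminfo HugePages_Total)   # 1024 in this run

The no_shrink_alloc assertion above then reduces to total == nr_hugepages + surp + resv, i.e. (( 1024 == 1024 + 0 + 0 )), which is why the scan runs once per counter.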
************************************ 00:14:59.656 START TEST driver 00:14:59.656 ************************************ 00:14:59.656 14:34:08 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:14:59.656 * Looking for test storage... 00:14:59.656 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:14:59.656 14:34:08 -- setup/driver.sh@68 -- # setup reset 00:14:59.656 14:34:08 -- setup/common.sh@9 -- # [[ reset == output ]] 00:14:59.656 14:34:08 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:06.213 14:34:14 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:15:06.213 14:34:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:06.213 14:34:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:06.213 14:34:14 -- common/autotest_common.sh@10 -- # set +x 00:15:06.213 ************************************ 00:15:06.213 START TEST guess_driver 00:15:06.213 ************************************ 00:15:06.213 14:34:14 -- common/autotest_common.sh@1111 -- # guess_driver 00:15:06.213 14:34:14 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:15:06.213 14:34:14 -- setup/driver.sh@47 -- # local fail=0 00:15:06.213 14:34:14 -- setup/driver.sh@49 -- # pick_driver 00:15:06.213 14:34:14 -- setup/driver.sh@36 -- # vfio 00:15:06.213 14:34:14 -- setup/driver.sh@21 -- # local iommu_grups 00:15:06.213 14:34:14 -- setup/driver.sh@22 -- # local unsafe_vfio 00:15:06.213 14:34:14 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:15:06.213 14:34:14 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:15:06.213 14:34:14 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:15:06.213 14:34:14 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:15:06.213 14:34:14 -- setup/driver.sh@32 -- # return 1 00:15:06.213 14:34:14 -- setup/driver.sh@38 -- # uio 00:15:06.213 14:34:14 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:15:06.213 14:34:14 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:15:06.213 14:34:14 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:15:06.213 14:34:14 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:15:06.213 14:34:14 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.5.12-200.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:15:06.213 insmod /lib/modules/6.5.12-200.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:15:06.213 14:34:14 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:15:06.213 14:34:14 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:15:06.213 14:34:14 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:15:06.213 Looking for driver=uio_pci_generic 00:15:06.213 14:34:14 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:15:06.213 14:34:14 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:15:06.213 14:34:14 -- setup/driver.sh@45 -- # setup output config 00:15:06.213 14:34:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:15:06.213 14:34:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:15:06.470 14:34:14 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:15:06.470 14:34:14 -- setup/driver.sh@58 -- # continue 00:15:06.470 14:34:14 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:15:07.034 lsblk: /dev/nvme3c3n1: not a block device 00:15:07.292 14:34:15 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:15:07.292 14:34:15 -- 
setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:15:07.292 14:34:15 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:15:07.292 14:34:15 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:15:07.292 14:34:15 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:15:07.292 14:34:15 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:15:07.292 14:34:15 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:15:07.292 14:34:15 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:15:07.292 14:34:15 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:15:07.292 14:34:15 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:15:07.292 14:34:15 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:15:07.292 14:34:15 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:15:07.550 14:34:15 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:15:07.550 14:34:15 -- setup/driver.sh@65 -- # setup reset 00:15:07.550 14:34:15 -- setup/common.sh@9 -- # [[ reset == output ]] 00:15:07.550 14:34:15 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:14.104 00:15:14.104 real 0m7.653s 00:15:14.104 user 0m0.928s 00:15:14.104 sys 0m1.814s 00:15:14.104 14:34:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:14.104 14:34:21 -- common/autotest_common.sh@10 -- # set +x 00:15:14.104 ************************************ 00:15:14.104 END TEST guess_driver 00:15:14.104 ************************************ 00:15:14.104 00:15:14.104 real 0m13.849s 00:15:14.104 user 0m1.315s 00:15:14.104 sys 0m2.728s 00:15:14.104 14:34:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:14.104 14:34:22 -- common/autotest_common.sh@10 -- # set +x 00:15:14.104 ************************************ 00:15:14.104 END TEST driver 00:15:14.104 ************************************ 00:15:14.104 14:34:22 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:15:14.104 14:34:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:14.104 14:34:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:14.104 14:34:22 -- common/autotest_common.sh@10 -- # set +x 00:15:14.104 ************************************ 00:15:14.104 START TEST devices 00:15:14.104 ************************************ 00:15:14.104 14:34:22 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:15:14.104 * Looking for test storage... 
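The guess_driver trace above reduces to one decision: use vfio only when IOMMU groups are populated (or unsafe no-IOMMU mode reads Y), otherwise fall back to uio_pci_generic if modprobe can resolve its module. A condensed sketch using the same paths as the trace; driver.sh actually splits this across its vfio/uio/is_driver helpers, and the vfio-pci name on the success branch is an assumption, since this run never reaches it:

  shopt -s nullglob
  pick_driver() {
      local groups=(/sys/kernel/iommu_groups/*)
      local unsafe=''
      [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
          unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
      if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
          echo vfio-pci            # assumed name; not exercised in this run
      elif modprobe --show-depends uio_pci_generic 2> /dev/null | grep -q '\.ko'; then
          echo uio_pci_generic     # module and its deps resolve, as logged above
      else
          echo 'No valid driver found'
      fi
  }

Here both vfio tests fail ((( 0 > 0 )) and [[ '' == Y ]]), so the stage prints Looking for driver=uio_pci_generic and proceeds with it.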
00:15:14.104 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:15:14.104 14:34:22 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:15:14.104 14:34:22 -- setup/devices.sh@192 -- # setup reset 00:15:14.104 14:34:22 -- setup/common.sh@9 -- # [[ reset == output ]] 00:15:14.104 14:34:22 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:15.038 14:34:23 -- setup/devices.sh@194 -- # get_zoned_devs 00:15:15.038 14:34:23 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:15:15.038 14:34:23 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:15:15.038 14:34:23 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:15:15.038 14:34:23 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:15.038 14:34:23 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:15:15.038 14:34:23 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:15:15.038 14:34:23 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:15.038 14:34:23 -- common/autotest_common.sh@1651 -- # [[ none != none ]] [... the same is_block_zoned check repeats for nvme1n1, nvme2n1, nvme2n2, nvme2n3 and nvme3c3n1; every namespace reports 'none', so zoned_devs stays empty ...] 00:15:15.038 14:34:23 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:15.038 14:34:23 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:15:15.038 14:34:23 -- common/autotest_common.sh@1648 -- # local
device=nvme3n1 00:15:15.038 14:34:23 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:15.038 14:34:23 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:15.038 14:34:23 -- setup/devices.sh@196 -- # blocks=() 00:15:15.038 14:34:23 -- setup/devices.sh@196 -- # declare -a blocks 00:15:15.038 14:34:23 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:15:15.038 14:34:23 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:15:15.038 14:34:23 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:15:15.038 14:34:23 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:15:15.038 14:34:23 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:15:15.038 14:34:23 -- setup/devices.sh@201 -- # ctrl=nvme0 00:15:15.038 14:34:23 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:15:15.038 14:34:23 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:15:15.038 14:34:23 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:15:15.038 14:34:23 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:15:15.038 14:34:23 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:15:15.038 No valid GPT data, bailing 00:15:15.038 14:34:23 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:15:15.038 14:34:23 -- scripts/common.sh@391 -- # pt= 00:15:15.038 14:34:23 -- scripts/common.sh@392 -- # return 1 00:15:15.038 14:34:23 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:15:15.038 14:34:23 -- setup/common.sh@76 -- # local dev=nvme0n1 00:15:15.038 14:34:23 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:15.038 14:34:23 -- setup/common.sh@80 -- # echo 5368709120 00:15:15.038 14:34:23 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:15:15.038 14:34:23 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:15:15.038 14:34:23 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:15:15.038 14:34:23 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:15:15.038 14:34:23 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:15:15.038 14:34:23 -- setup/devices.sh@201 -- # ctrl=nvme1 00:15:15.038 14:34:23 -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:15:15.038 14:34:23 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:15:15.038 14:34:23 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:15:15.038 14:34:23 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:15:15.038 14:34:23 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:15:15.296 No valid GPT data, bailing 00:15:15.296 14:34:23 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:15:15.296 14:34:23 -- scripts/common.sh@391 -- # pt= 00:15:15.296 14:34:23 -- scripts/common.sh@392 -- # return 1 00:15:15.296 14:34:23 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:15:15.296 14:34:23 -- setup/common.sh@76 -- # local dev=nvme1n1 00:15:15.296 14:34:23 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:15:15.296 14:34:23 -- setup/common.sh@80 -- # echo 6343335936 00:15:15.296 14:34:23 -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:15:15.296 14:34:23 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:15:15.296 14:34:23 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:15:15.296 14:34:23 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:15:15.296 14:34:23 -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:15:15.296 14:34:23 -- 
setup/devices.sh@201 -- # ctrl=nvme2 00:15:15.296 14:34:23 -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:15:15.296 14:34:23 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:15:15.296 14:34:23 -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:15:15.296 14:34:23 -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:15:15.296 14:34:23 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:15:15.296 No valid GPT data, bailing 00:15:15.296 14:34:23 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:15:15.296 14:34:23 -- scripts/common.sh@391 -- # pt= 00:15:15.296 14:34:23 -- scripts/common.sh@392 -- # return 1 00:15:15.296 14:34:23 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:15:15.296 14:34:23 -- setup/common.sh@76 -- # local dev=nvme2n1 00:15:15.296 14:34:23 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:15:15.296 14:34:23 -- setup/common.sh@80 -- # echo 4294967296 00:15:15.296 14:34:23 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:15:15.296 14:34:23 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:15:15.296 14:34:23 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:15:15.296 14:34:23 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:15:15.296 14:34:23 -- setup/devices.sh@201 -- # ctrl=nvme2n2 00:15:15.296 14:34:23 -- setup/devices.sh@201 -- # ctrl=nvme2 00:15:15.296 14:34:23 -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:15:15.296 14:34:23 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:15:15.296 14:34:23 -- setup/devices.sh@204 -- # block_in_use nvme2n2 00:15:15.296 14:34:23 -- scripts/common.sh@378 -- # local block=nvme2n2 pt 00:15:15.296 14:34:23 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n2 00:15:15.296 No valid GPT data, bailing 00:15:15.296 14:34:23 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:15:15.296 14:34:23 -- scripts/common.sh@391 -- # pt= 00:15:15.296 14:34:23 -- scripts/common.sh@392 -- # return 1 00:15:15.296 14:34:23 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n2 00:15:15.296 14:34:23 -- setup/common.sh@76 -- # local dev=nvme2n2 00:15:15.296 14:34:23 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n2 ]] 00:15:15.296 14:34:23 -- setup/common.sh@80 -- # echo 4294967296 00:15:15.296 14:34:23 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:15:15.296 14:34:23 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:15:15.296 14:34:23 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:15:15.296 14:34:23 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:15:15.296 14:34:23 -- setup/devices.sh@201 -- # ctrl=nvme2n3 00:15:15.296 14:34:23 -- setup/devices.sh@201 -- # ctrl=nvme2 00:15:15.296 14:34:23 -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:15:15.296 14:34:23 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:15:15.296 14:34:23 -- setup/devices.sh@204 -- # block_in_use nvme2n3 00:15:15.297 14:34:23 -- scripts/common.sh@378 -- # local block=nvme2n3 pt 00:15:15.297 14:34:23 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n3 00:15:15.554 No valid GPT data, bailing 00:15:15.554 14:34:23 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:15:15.554 14:34:23 -- scripts/common.sh@391 -- # pt= 00:15:15.554 14:34:23 -- scripts/common.sh@392 -- # return 1 00:15:15.554 14:34:23 -- 
setup/devices.sh@204 -- # sec_size_to_bytes nvme2n3 00:15:15.554 14:34:23 -- setup/common.sh@76 -- # local dev=nvme2n3 00:15:15.554 14:34:23 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n3 ]] 00:15:15.554 14:34:23 -- setup/common.sh@80 -- # echo 4294967296 00:15:15.554 14:34:23 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:15:15.554 14:34:23 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:15:15.554 14:34:23 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:15:15.554 14:34:23 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:15:15.554 14:34:23 -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:15:15.554 14:34:23 -- setup/devices.sh@201 -- # ctrl=nvme3 00:15:15.554 14:34:23 -- setup/devices.sh@202 -- # pci=0000:00:13.0 00:15:15.554 14:34:23 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:15:15.554 14:34:23 -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:15:15.554 14:34:23 -- scripts/common.sh@378 -- # local block=nvme3n1 pt 00:15:15.554 14:34:23 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:15:15.554 No valid GPT data, bailing 00:15:15.554 14:34:24 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:15:15.554 14:34:24 -- scripts/common.sh@391 -- # pt= 00:15:15.554 14:34:24 -- scripts/common.sh@392 -- # return 1 00:15:15.554 14:34:24 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:15:15.554 14:34:24 -- setup/common.sh@76 -- # local dev=nvme3n1 00:15:15.554 14:34:24 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:15:15.554 14:34:24 -- setup/common.sh@80 -- # echo 1073741824 00:15:15.554 14:34:24 -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:15:15.554 14:34:24 -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:15:15.554 14:34:24 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:15:15.554 14:34:24 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:15:15.554 14:34:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:15.554 14:34:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:15.555 14:34:24 -- common/autotest_common.sh@10 -- # set +x 00:15:15.555 ************************************ 00:15:15.555 START TEST nvme_mount 00:15:15.555 ************************************ 00:15:15.555 14:34:24 -- common/autotest_common.sh@1111 -- # nvme_mount 00:15:15.555 14:34:24 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:15:15.555 14:34:24 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:15:15.555 14:34:24 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:15.555 14:34:24 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:15:15.555 14:34:24 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:15:15.555 14:34:24 -- setup/common.sh@39 -- # local disk=nvme0n1 00:15:15.555 14:34:24 -- setup/common.sh@40 -- # local part_no=1 00:15:15.555 14:34:24 -- setup/common.sh@41 -- # local size=1073741824 00:15:15.555 14:34:24 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:15:15.555 14:34:24 -- setup/common.sh@44 -- # parts=() 00:15:15.555 14:34:24 -- setup/common.sh@44 -- # local parts 00:15:15.555 14:34:24 -- setup/common.sh@46 -- # (( part = 1 )) 00:15:15.555 14:34:24 -- setup/common.sh@46 -- # (( part <= part_no )) 00:15:15.555 14:34:24 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:15:15.555 14:34:24 -- setup/common.sh@46 -- # (( part++ )) 
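By this point devices.sh has built its candidate list: zoned namespaces are excluded via /sys/block/*/queue/zoned, spdk-gpt.py and blkid confirm each disk carries no partition table ("No valid GPT data, bailing" is the usable case), and anything under min_disk_size=3221225472 (3 GiB) is dropped, which is why the 1 GiB nvme3n1 is probed but only five block devices survive and nvme0n1 becomes test_disk. A sketch of that filter, assuming /sys/block/<dev>/size counts 512-byte sectors; the helpers named in the log (block_in_use, sec_size_to_bytes) are condensed here:

  shopt -s extglob                                  # for the nvme!(*c*) glob used above
  min_disk_size=3221225472                          # 3 GiB
  for sys in /sys/block/nvme!(*c*); do
      dev=${sys##*/}
      [[ $(cat "$sys/queue/zoned" 2>/dev/null) == none ]] || continue  # skip zoned namespaces
      [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]] || continue     # skip partitioned disks
      bytes=$(( $(<"$sys/size") * 512 ))            # e.g. 5368709120 for nvme0n1
      (( bytes >= min_disk_size )) && echo "$dev usable ($bytes bytes)"
  done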
00:15:15.555 14:34:24 -- setup/common.sh@46 -- # (( part <= part_no )) 00:15:15.555 14:34:24 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:15:15.555 14:34:24 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:15:15.555 14:34:24 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:15:16.949 Creating new GPT entries in memory. 00:15:16.949 GPT data structures destroyed! You may now partition the disk using fdisk or 00:15:16.949 other utilities. 00:15:16.949 14:34:25 -- setup/common.sh@57 -- # (( part = 1 )) 00:15:16.949 14:34:25 -- setup/common.sh@57 -- # (( part <= part_no )) 00:15:16.949 14:34:25 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:15:16.949 14:34:25 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:15:16.949 14:34:25 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:15:17.883 Creating new GPT entries in memory. 00:15:17.883 The operation has completed successfully. 00:15:17.883 14:34:26 -- setup/common.sh@57 -- # (( part++ )) 00:15:17.883 14:34:26 -- setup/common.sh@57 -- # (( part <= part_no )) 00:15:17.883 14:34:26 -- setup/common.sh@62 -- # wait 59013 00:15:17.883 14:34:26 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:17.883 14:34:26 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:15:17.883 14:34:26 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:17.883 14:34:26 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:15:17.883 14:34:26 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:15:17.883 14:34:26 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:17.883 14:34:26 -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:15:17.883 14:34:26 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:15:17.883 14:34:26 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:15:17.883 14:34:26 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:17.883 14:34:26 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:15:17.883 14:34:26 -- setup/devices.sh@53 -- # local found=0 00:15:17.883 14:34:26 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:15:17.883 14:34:26 -- setup/devices.sh@56 -- # : 00:15:17.883 14:34:26 -- setup/devices.sh@59 -- # local pci status 00:15:17.883 14:34:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:17.883 14:34:26 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:15:17.883 14:34:26 -- setup/devices.sh@47 -- # setup output config 00:15:17.883 14:34:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:15:17.883 14:34:26 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:15:18.141 14:34:26 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:18.141 14:34:26 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:15:18.141 14:34:26 -- setup/devices.sh@63 -- # found=1 00:15:18.141 
14:34:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:18.141 14:34:26 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:18.141 14:34:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:18.141 14:34:26 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:18.141 14:34:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:18.398 14:34:26 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:18.398 14:34:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:18.399 14:34:26 -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:18.399 14:34:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:18.656 14:34:27 -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:18.656 14:34:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:18.656 lsblk: /dev/nvme3c3n1: not a block device 00:15:18.915 14:34:27 -- setup/devices.sh@66 -- # (( found == 1 )) 00:15:18.915 14:34:27 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:15:18.915 14:34:27 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:18.915 14:34:27 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:15:18.915 14:34:27 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:15:18.915 14:34:27 -- setup/devices.sh@110 -- # cleanup_nvme 00:15:18.915 14:34:27 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:18.915 14:34:27 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:18.915 14:34:27 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:15:18.915 14:34:27 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:15:18.915 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:15:18.915 14:34:27 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:15:18.915 14:34:27 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:15:19.173 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:15:19.173 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:15:19.173 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:15:19.173 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:15:19.173 14:34:27 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:15:19.173 14:34:27 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:15:19.173 14:34:27 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:19.173 14:34:27 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:15:19.173 14:34:27 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:15:19.173 14:34:27 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:19.173 14:34:27 -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:15:19.173 14:34:27 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:15:19.173 14:34:27 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:15:19.174 14:34:27 -- 
setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:19.174 14:34:27 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:15:19.174 14:34:27 -- setup/devices.sh@53 -- # local found=0 00:15:19.174 14:34:27 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:15:19.174 14:34:27 -- setup/devices.sh@56 -- # : 00:15:19.174 14:34:27 -- setup/devices.sh@59 -- # local pci status 00:15:19.174 14:34:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:19.174 14:34:27 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:15:19.174 14:34:27 -- setup/devices.sh@47 -- # setup output config 00:15:19.174 14:34:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:15:19.174 14:34:27 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:15:19.431 14:34:27 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:19.431 14:34:27 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:15:19.431 14:34:27 -- setup/devices.sh@63 -- # found=1 00:15:19.431 14:34:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:19.431 14:34:27 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:19.431 14:34:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:19.688 14:34:28 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:19.688 14:34:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:19.688 14:34:28 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:19.688 14:34:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:19.688 14:34:28 -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:19.688 14:34:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:20.253 14:34:28 -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:20.253 14:34:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:20.253 lsblk: /dev/nvme3c3n1: not a block device 00:15:20.253 14:34:28 -- setup/devices.sh@66 -- # (( found == 1 )) 00:15:20.253 14:34:28 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:15:20.253 14:34:28 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:20.253 14:34:28 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:15:20.253 14:34:28 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:15:20.253 14:34:28 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:20.253 14:34:28 -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:15:20.253 14:34:28 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:15:20.253 14:34:28 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:15:20.253 14:34:28 -- setup/devices.sh@50 -- # local mount_point= 00:15:20.253 14:34:28 -- setup/devices.sh@51 -- # local test_file= 00:15:20.253 14:34:28 -- setup/devices.sh@53 -- # local found=0 00:15:20.253 14:34:28 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:15:20.253 14:34:28 -- setup/devices.sh@59 -- # local pci status 00:15:20.253 14:34:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:20.253 14:34:28 -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:15:20.253 14:34:28 -- setup/devices.sh@47 -- # setup output config 00:15:20.253 14:34:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:15:20.253 14:34:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:15:20.819 14:34:29 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:20.819 14:34:29 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:15:20.819 14:34:29 -- setup/devices.sh@63 -- # found=1 00:15:20.819 14:34:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:20.819 14:34:29 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:20.819 14:34:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:20.819 14:34:29 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:20.819 14:34:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:21.076 14:34:29 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:21.076 14:34:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:21.076 14:34:29 -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:21.076 14:34:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:21.334 14:34:29 -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:21.334 14:34:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:21.591 lsblk: /dev/nvme3c3n1: not a block device 00:15:21.591 14:34:30 -- setup/devices.sh@66 -- # (( found == 1 )) 00:15:21.591 14:34:30 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:15:21.591 14:34:30 -- setup/devices.sh@68 -- # return 0 00:15:21.591 14:34:30 -- setup/devices.sh@128 -- # cleanup_nvme 00:15:21.591 14:34:30 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:21.591 14:34:30 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:15:21.591 14:34:30 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:15:21.591 14:34:30 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:15:21.591 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:15:21.591 00:15:21.591 real 0m5.965s 00:15:21.591 user 0m1.643s 00:15:21.591 sys 0m2.024s 00:15:21.591 14:34:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:21.591 14:34:30 -- common/autotest_common.sh@10 -- # set +x 00:15:21.591 ************************************ 00:15:21.591 END TEST nvme_mount 00:15:21.591 ************************************ 00:15:21.591 14:34:30 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:15:21.591 14:34:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:21.591 14:34:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:21.591 14:34:30 -- common/autotest_common.sh@10 -- # set +x 00:15:21.849 ************************************ 00:15:21.849 START TEST dm_mount 00:15:21.849 ************************************ 00:15:21.849 14:34:30 -- common/autotest_common.sh@1111 -- # dm_mount 00:15:21.849 14:34:30 -- setup/devices.sh@144 -- # pv=nvme0n1 00:15:21.849 14:34:30 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:15:21.849 14:34:30 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:15:21.849 14:34:30 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:15:21.849 14:34:30 -- setup/common.sh@39 -- # local disk=nvme0n1 00:15:21.849 14:34:30 -- setup/common.sh@40 -- # local part_no=2 00:15:21.849 
14:34:30 -- setup/common.sh@41 -- # local size=1073741824 00:15:21.849 14:34:30 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:15:21.849 14:34:30 -- setup/common.sh@44 -- # parts=() 00:15:21.849 14:34:30 -- setup/common.sh@44 -- # local parts 00:15:21.849 14:34:30 -- setup/common.sh@46 -- # (( part = 1 )) 00:15:21.849 14:34:30 -- setup/common.sh@46 -- # (( part <= part_no )) 00:15:21.849 14:34:30 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:15:21.849 14:34:30 -- setup/common.sh@46 -- # (( part++ )) 00:15:21.849 14:34:30 -- setup/common.sh@46 -- # (( part <= part_no )) 00:15:21.849 14:34:30 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:15:21.849 14:34:30 -- setup/common.sh@46 -- # (( part++ )) 00:15:21.849 14:34:30 -- setup/common.sh@46 -- # (( part <= part_no )) 00:15:21.849 14:34:30 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:15:21.849 14:34:30 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:15:21.849 14:34:30 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:15:22.785 Creating new GPT entries in memory. 00:15:22.785 GPT data structures destroyed! You may now partition the disk using fdisk or 00:15:22.785 other utilities. 00:15:22.785 14:34:31 -- setup/common.sh@57 -- # (( part = 1 )) 00:15:22.785 14:34:31 -- setup/common.sh@57 -- # (( part <= part_no )) 00:15:22.785 14:34:31 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:15:22.785 14:34:31 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:15:22.785 14:34:31 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:15:23.720 Creating new GPT entries in memory. 00:15:23.720 The operation has completed successfully. 00:15:23.720 14:34:32 -- setup/common.sh@57 -- # (( part++ )) 00:15:23.720 14:34:32 -- setup/common.sh@57 -- # (( part <= part_no )) 00:15:23.720 14:34:32 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:15:23.720 14:34:32 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:15:23.720 14:34:32 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:15:25.152 The operation has completed successfully. 
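The dm_mount test repeats the partition recipe from nvme_mount, now with part_no=2, and the sgdisk boundaries above follow directly from the xtrace arithmetic: size=1073741824 divided by 4096 gives 262144 sectors per partition, so partition 1 spans 2048..264191 and partition 2 spans 264192..526335. A condensed sketch of what setup/common.sh is doing (sync_dev_uevents.sh is the harness helper that waits for the partition uevents before mkfs can proceed):

  disk=/dev/nvme0n1 part_no=2
  size=$(( 1073741824 / 4096 ))                     # 262144 sectors per partition
  part_start=0 part_end=0
  sgdisk "$disk" --zap-all
  for (( part = 1; part <= part_no; part++ )); do
      (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
      (( part_end = part_start + size - 1 ))        # 1:2048:264191, 2:264192:526335
      flock "$disk" sgdisk "$disk" --new="$part:$part_start:$part_end"
  done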
00:15:25.152 14:34:33 -- setup/common.sh@57 -- # (( part++ )) 00:15:25.152 14:34:33 -- setup/common.sh@57 -- # (( part <= part_no )) 00:15:25.152 14:34:33 -- setup/common.sh@62 -- # wait 59743 00:15:25.152 14:34:33 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:15:25.152 14:34:33 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:15:25.152 14:34:33 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:15:25.152 14:34:33 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:15:25.152 14:34:33 -- setup/devices.sh@160 -- # for t in {1..5} 00:15:25.152 14:34:33 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:15:25.152 14:34:33 -- setup/devices.sh@161 -- # break 00:15:25.152 14:34:33 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:15:25.152 14:34:33 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:15:25.152 14:34:33 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:15:25.152 14:34:33 -- setup/devices.sh@166 -- # dm=dm-0 00:15:25.152 14:34:33 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:15:25.152 14:34:33 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:15:25.152 14:34:33 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:15:25.152 14:34:33 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:15:25.152 14:34:33 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:15:25.152 14:34:33 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:15:25.153 14:34:33 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:15:25.153 14:34:33 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:15:25.153 14:34:33 -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:15:25.153 14:34:33 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:15:25.153 14:34:33 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:15:25.153 14:34:33 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:15:25.153 14:34:33 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:15:25.153 14:34:33 -- setup/devices.sh@53 -- # local found=0 00:15:25.153 14:34:33 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:15:25.153 14:34:33 -- setup/devices.sh@56 -- # : 00:15:25.153 14:34:33 -- setup/devices.sh@59 -- # local pci status 00:15:25.153 14:34:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:25.153 14:34:33 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:15:25.153 14:34:33 -- setup/devices.sh@47 -- # setup output config 00:15:25.153 14:34:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:15:25.153 14:34:33 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:15:25.153 14:34:33 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:25.153 14:34:33 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:15:25.153 14:34:33 -- setup/devices.sh@63 -- # found=1 00:15:25.153 14:34:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:25.153 14:34:33 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:25.153 14:34:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:25.411 14:34:33 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:25.411 14:34:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:25.411 14:34:33 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:25.411 14:34:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:25.411 14:34:33 -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:25.411 14:34:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:25.978 14:34:34 -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:25.978 14:34:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:25.978 lsblk: /dev/nvme3c3n1: not a block device 00:15:25.978 14:34:34 -- setup/devices.sh@66 -- # (( found == 1 )) 00:15:25.978 14:34:34 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:15:25.978 14:34:34 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:15:25.978 14:34:34 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:15:25.978 14:34:34 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:15:25.978 14:34:34 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:15:25.978 14:34:34 -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:15:25.978 14:34:34 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:15:25.978 14:34:34 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:15:25.978 14:34:34 -- setup/devices.sh@50 -- # local mount_point= 00:15:25.978 14:34:34 -- setup/devices.sh@51 -- # local test_file= 00:15:25.978 14:34:34 -- setup/devices.sh@53 -- # local found=0 00:15:25.978 14:34:34 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:15:25.978 14:34:34 -- setup/devices.sh@59 -- # local pci status 00:15:25.978 14:34:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:25.978 14:34:34 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:15:25.978 14:34:34 -- setup/devices.sh@47 -- # setup output config 00:15:25.978 14:34:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:15:25.978 14:34:34 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:15:26.542 14:34:34 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:26.542 14:34:34 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:15:26.542 14:34:34 -- setup/devices.sh@63 -- # found=1 00:15:26.542 14:34:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:26.542 14:34:34 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:26.542 14:34:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:26.542 14:34:35 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:26.542 14:34:35 -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:15:26.800 14:34:35 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:26.800 14:34:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:26.800 14:34:35 -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:26.800 14:34:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:27.059 14:34:35 -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:27.059 14:34:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:27.059 lsblk: /dev/nvme3c3n1: not a block device 00:15:27.317 14:34:35 -- setup/devices.sh@66 -- # (( found == 1 )) 00:15:27.317 14:34:35 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:15:27.317 14:34:35 -- setup/devices.sh@68 -- # return 0 00:15:27.317 14:34:35 -- setup/devices.sh@187 -- # cleanup_dm 00:15:27.317 14:34:35 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:15:27.317 14:34:35 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:15:27.317 14:34:35 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:15:27.317 14:34:35 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:15:27.317 14:34:35 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:15:27.317 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:15:27.317 14:34:35 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:15:27.317 14:34:35 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:15:27.317 00:15:27.317 real 0m5.622s 00:15:27.317 user 0m1.063s 00:15:27.317 sys 0m1.450s 00:15:27.317 14:34:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:27.317 14:34:35 -- common/autotest_common.sh@10 -- # set +x 00:15:27.317 ************************************ 00:15:27.317 END TEST dm_mount 00:15:27.317 ************************************ 00:15:27.317 14:34:35 -- setup/devices.sh@1 -- # cleanup 00:15:27.317 14:34:35 -- setup/devices.sh@11 -- # cleanup_nvme 00:15:27.317 14:34:35 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:27.317 14:34:35 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:15:27.317 14:34:35 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:15:27.317 14:34:35 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:15:27.317 14:34:35 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:15:27.622 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:15:27.622 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:15:27.622 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:15:27.622 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:15:27.622 14:34:36 -- setup/devices.sh@12 -- # cleanup_dm 00:15:27.623 14:34:36 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:15:27.623 14:34:36 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:15:27.623 14:34:36 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:15:27.623 14:34:36 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:15:27.623 14:34:36 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:15:27.623 14:34:36 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:15:27.623 00:15:27.623 real 0m14.064s 00:15:27.623 user 0m3.795s 00:15:27.623 sys 0m4.533s 00:15:27.623 14:34:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:27.623 14:34:36 -- common/autotest_common.sh@10 -- # set +x 00:15:27.623 
************************************ 00:15:27.623 END TEST devices 00:15:27.623 ************************************ 00:15:27.884 00:15:27.884 real 0m48.731s 00:15:27.884 user 0m11.441s 00:15:27.884 sys 0m16.427s 00:15:27.884 14:34:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:27.884 14:34:36 -- common/autotest_common.sh@10 -- # set +x 00:15:27.884 ************************************ 00:15:27.884 END TEST setup.sh 00:15:27.884 ************************************ 00:15:27.884 14:34:36 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:15:28.448 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:29.014 lsblk: /dev/nvme3c3n1: not a block device 00:15:29.014 Hugepages 00:15:29.014 node hugesize free / total 00:15:29.014 node0 1048576kB 0 / 0 00:15:29.014 node0 2048kB 2048 / 2048 00:15:29.014 00:15:29.014 Type BDF Vendor Device NUMA Driver Device Block devices 00:15:29.014 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:15:29.273 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:15:29.273 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:15:29.532 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:15:29.532 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3c3n1 00:15:29.532 14:34:38 -- spdk/autotest.sh@130 -- # uname -s 00:15:29.532 14:34:38 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:15:29.532 14:34:38 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:15:29.532 14:34:38 -- common/autotest_common.sh@1517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:30.100 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:30.667 lsblk: /dev/nvme3c3n1: not a block device 00:15:30.924 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:30.924 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:30.924 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:30.924 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:31.182 14:34:39 -- common/autotest_common.sh@1518 -- # sleep 1 00:15:32.117 14:34:40 -- common/autotest_common.sh@1519 -- # bdfs=() 00:15:32.117 14:34:40 -- common/autotest_common.sh@1519 -- # local bdfs 00:15:32.117 14:34:40 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:15:32.117 14:34:40 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:15:32.117 14:34:40 -- common/autotest_common.sh@1499 -- # bdfs=() 00:15:32.117 14:34:40 -- common/autotest_common.sh@1499 -- # local bdfs 00:15:32.117 14:34:40 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:32.117 14:34:40 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:32.117 14:34:40 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:15:32.117 14:34:40 -- common/autotest_common.sh@1501 -- # (( 4 == 0 )) 00:15:32.117 14:34:40 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:15:32.117 14:34:40 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:32.683 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:32.683 Waiting for block devices as requested 00:15:32.941 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:32.941 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 
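The nvme_namespace_revert pass above enumerates controllers by rendering an SPDK config with gen_nvme.sh and pulling the PCI addresses out with jq; the per-controller loop that follows resolves each BDF to its /dev/nvmeX node through sysfs and inspects the OACS word. A condensed sketch of both steps; the 0x8 mask is the namespace-management bit implied by oacs_ns_manage=8 in the log below:

  bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
  for bdf in "${bdfs[@]}"; do
      path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
      ctrlr=/dev/${path##*/}                              # e.g. /dev/nvme1 for 0000:00:10.0
      oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
      (( oacs & 0x8 )) || continue                        # controller lacks ns management
      unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
      (( unvmcap == 0 )) && echo "$ctrlr: unallocated capacity already 0, skipping"
  done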
00:15:32.941 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:33.199 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:38.495 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:38.495 14:34:46 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:15:38.495 14:34:46 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:15:38.495 14:34:46 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:15:38.495 14:34:46 -- common/autotest_common.sh@1488 -- # grep 0000:00:10.0/nvme/nvme 00:15:38.495 14:34:46 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:15:38.495 14:34:46 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:15:38.495 14:34:46 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:15:38.495 14:34:46 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme1 00:15:38.495 14:34:46 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:15:38.495 14:34:46 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:15:38.495 14:34:46 -- common/autotest_common.sh@1531 -- # grep oacs 00:15:38.495 14:34:46 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:15:38.495 14:34:46 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:15:38.495 14:34:46 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:15:38.495 14:34:46 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:15:38.495 14:34:46 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:15:38.495 14:34:46 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:15:38.495 14:34:46 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:15:38.495 14:34:46 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:15:38.495 14:34:46 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:15:38.495 14:34:46 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:15:38.495 14:34:46 -- common/autotest_common.sh@1543 -- # continue 00:15:38.495 14:34:46 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:15:38.495 14:34:46 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:15:38.495 14:34:46 -- common/autotest_common.sh@1488 -- # grep 0000:00:11.0/nvme/nvme 00:15:38.495 14:34:46 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:15:38.495 14:34:46 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:15:38.495 14:34:46 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:15:38.495 14:34:46 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:15:38.495 14:34:46 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:15:38.495 14:34:46 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:15:38.495 14:34:46 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:15:38.495 14:34:46 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:15:38.495 14:34:46 -- common/autotest_common.sh@1531 -- # grep oacs 00:15:38.495 14:34:46 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:15:38.495 14:34:46 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:15:38.495 14:34:46 -- 
common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:15:38.495 14:34:46 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:15:38.495 14:34:46 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:15:38.495 14:34:46 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:15:38.495 14:34:46 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:15:38.495 14:34:46 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:15:38.495 14:34:46 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:15:38.495 14:34:46 -- common/autotest_common.sh@1543 -- # continue 00:15:38.495 14:34:46 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:15:38.495 14:34:46 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:15:38.495 14:34:46 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:15:38.495 14:34:46 -- common/autotest_common.sh@1488 -- # grep 0000:00:12.0/nvme/nvme 00:15:38.495 14:34:46 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:15:38.495 14:34:46 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:15:38.495 14:34:46 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:15:38.495 14:34:46 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme2 00:15:38.495 14:34:46 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:15:38.495 14:34:46 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:15:38.495 14:34:46 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:15:38.495 14:34:46 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:15:38.495 14:34:46 -- common/autotest_common.sh@1531 -- # grep oacs 00:15:38.495 14:34:46 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:15:38.495 14:34:46 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:15:38.495 14:34:46 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:15:38.495 14:34:46 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:15:38.495 14:34:46 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:15:38.495 14:34:46 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:15:38.495 14:34:46 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:15:38.495 14:34:46 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:15:38.495 14:34:46 -- common/autotest_common.sh@1543 -- # continue 00:15:38.495 14:34:46 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:15:38.495 14:34:46 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:15:38.495 14:34:46 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:15:38.495 14:34:46 -- common/autotest_common.sh@1488 -- # grep 0000:00:13.0/nvme/nvme 00:15:38.495 14:34:46 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:15:38.495 14:34:46 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:15:38.495 14:34:46 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:15:38.495 14:34:46 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme3 00:15:38.495 14:34:46 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:15:38.495 14:34:46 -- common/autotest_common.sh@1526 -- # [[ -z 
/dev/nvme3 ]] 00:15:38.495 14:34:46 -- common/autotest_common.sh@1531 -- # grep oacs 00:15:38.495 14:34:46 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:15:38.495 14:34:46 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:15:38.495 14:34:46 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:15:38.495 14:34:46 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:15:38.495 14:34:46 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:15:38.495 14:34:46 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:15:38.495 14:34:46 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:15:38.495 14:34:46 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:15:38.495 14:34:46 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:15:38.495 14:34:46 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:15:38.495 14:34:46 -- common/autotest_common.sh@1543 -- # continue 00:15:38.495 14:34:46 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:15:38.495 14:34:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:38.495 14:34:46 -- common/autotest_common.sh@10 -- # set +x 00:15:38.495 14:34:46 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:15:38.495 14:34:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:38.495 14:34:46 -- common/autotest_common.sh@10 -- # set +x 00:15:38.495 14:34:46 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:39.067 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:39.633 lsblk: /dev/nvme3c3n1: not a block device 00:15:39.891 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:39.891 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:39.891 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:39.891 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:39.891 14:34:48 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:15:39.891 14:34:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:39.891 14:34:48 -- common/autotest_common.sh@10 -- # set +x 00:15:40.149 14:34:48 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:15:40.149 14:34:48 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:15:40.149 14:34:48 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:15:40.149 14:34:48 -- common/autotest_common.sh@1563 -- # bdfs=() 00:15:40.149 14:34:48 -- common/autotest_common.sh@1563 -- # local bdfs 00:15:40.149 14:34:48 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:15:40.149 14:34:48 -- common/autotest_common.sh@1499 -- # bdfs=() 00:15:40.149 14:34:48 -- common/autotest_common.sh@1499 -- # local bdfs 00:15:40.149 14:34:48 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:40.149 14:34:48 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:40.149 14:34:48 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:15:40.149 14:34:48 -- common/autotest_common.sh@1501 -- # (( 4 == 0 )) 00:15:40.149 14:34:48 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:15:40.149 14:34:48 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:15:40.149 14:34:48 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:15:40.149 14:34:48 -- common/autotest_common.sh@1566 -- # device=0x0010 00:15:40.149 14:34:48 -- common/autotest_common.sh@1567 
-- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:15:40.149 14:34:48 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:15:40.149 14:34:48 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:15:40.149 14:34:48 -- common/autotest_common.sh@1566 -- # device=0x0010 00:15:40.149 14:34:48 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:15:40.149 14:34:48 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:15:40.149 14:34:48 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:15:40.149 14:34:48 -- common/autotest_common.sh@1566 -- # device=0x0010 00:15:40.149 14:34:48 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:15:40.149 14:34:48 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:15:40.149 14:34:48 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:15:40.149 14:34:48 -- common/autotest_common.sh@1566 -- # device=0x0010 00:15:40.149 14:34:48 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:15:40.149 14:34:48 -- common/autotest_common.sh@1572 -- # printf '%s\n' 00:15:40.149 14:34:48 -- common/autotest_common.sh@1578 -- # [[ -z '' ]] 00:15:40.149 14:34:48 -- common/autotest_common.sh@1579 -- # return 0 00:15:40.149 14:34:48 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:15:40.149 14:34:48 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:15:40.149 14:34:48 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:15:40.149 14:34:48 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:15:40.149 14:34:48 -- spdk/autotest.sh@162 -- # timing_enter lib 00:15:40.149 14:34:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:40.149 14:34:48 -- common/autotest_common.sh@10 -- # set +x 00:15:40.149 14:34:48 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:15:40.149 14:34:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:40.149 14:34:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:40.149 14:34:48 -- common/autotest_common.sh@10 -- # set +x 00:15:40.149 ************************************ 00:15:40.149 START TEST env 00:15:40.149 ************************************ 00:15:40.149 14:34:48 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:15:40.409 * Looking for test storage... 
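The opal_revert_cleanup pass then filters by PCI device ID, as the [[ 0x0010 == \0\x\0\a\5\4 ]] checks above show: only controllers whose sysfs device file reads 0x0a54 would be reverted, and the emulated 1b36:0010 controllers in this VM all fail the match, so the printf emits an empty list and the function returns 0. The same filter from the shell, with the BDF list taken from the log:

  for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
      device=$(cat "/sys/bus/pci/devices/$bdf/device")   # 0x0010 on these QEMU NVMe devices
      [[ $device == 0x0a54 ]] && echo "$bdf"             # nothing matches here
  done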
00:15:40.409 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:15:40.409 14:34:48 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:15:40.409 14:34:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:40.409 14:34:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:40.409 14:34:48 -- common/autotest_common.sh@10 -- # set +x 00:15:40.409 ************************************ 00:15:40.409 START TEST env_memory 00:15:40.409 ************************************ 00:15:40.409 14:34:48 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:15:40.409 00:15:40.409 00:15:40.409 CUnit - A unit testing framework for C - Version 2.1-3 00:15:40.409 http://cunit.sourceforge.net/ 00:15:40.409 00:15:40.409 00:15:40.409 Suite: memory 00:15:40.409 Test: alloc and free memory map ...[2024-04-17 14:34:48.973484] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:15:40.667 passed 00:15:40.667 Test: mem map translation ...[2024-04-17 14:34:49.042618] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:15:40.667 [2024-04-17 14:34:49.042914] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:15:40.667 [2024-04-17 14:34:49.043203] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:15:40.667 [2024-04-17 14:34:49.043383] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:15:40.667 passed 00:15:40.667 Test: mem map registration ...[2024-04-17 14:34:49.163909] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:15:40.667 [2024-04-17 14:34:49.164179] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:15:40.667 passed 00:15:40.667 Test: mem map adjacent registrations ...passed 00:15:40.667 00:15:40.667 Run Summary: Type Total Ran Passed Failed Inactive 00:15:40.667 suites 1 1 n/a 0 0 00:15:40.667 tests 4 4 4 0 0 00:15:40.667 asserts 152 152 152 0 n/a 00:15:40.667 00:15:40.926 Elapsed time = 0.346 seconds 00:15:40.926 00:15:40.926 real 0m0.394s 00:15:40.926 user 0m0.357s 00:15:40.926 sys 0m0.029s 00:15:40.926 14:34:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:40.926 14:34:49 -- common/autotest_common.sh@10 -- # set +x 00:15:40.926 ************************************ 00:15:40.926 END TEST env_memory 00:15:40.926 ************************************ 00:15:40.926 14:34:49 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:15:40.926 14:34:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:40.926 14:34:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:40.926 14:34:49 -- common/autotest_common.sh@10 -- # set +x 00:15:40.926 ************************************ 00:15:40.926 START TEST env_vtophys 00:15:40.926 ************************************ 00:15:40.926 14:34:49 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:15:40.926 EAL: lib.eal log level changed from notice to debug 00:15:40.926 EAL: Detected lcore 0 as core 0 on socket 0 00:15:40.926 EAL: Detected lcore 1 as core 0 on socket 0 00:15:40.926 EAL: Detected lcore 2 as core 0 on socket 0 00:15:40.926 EAL: Detected lcore 3 as core 0 on socket 0 00:15:40.926 EAL: Detected lcore 4 as core 0 on socket 0 00:15:40.926 EAL: Detected lcore 5 as core 0 on socket 0 00:15:40.926 EAL: Detected lcore 6 as core 0 on socket 0 00:15:40.926 EAL: Detected lcore 7 as core 0 on socket 0 00:15:40.926 EAL: Detected lcore 8 as core 0 on socket 0 00:15:40.926 EAL: Detected lcore 9 as core 0 on socket 0 00:15:40.926 EAL: Maximum logical cores by configuration: 128 00:15:40.926 EAL: Detected CPU lcores: 10 00:15:40.926 EAL: Detected NUMA nodes: 1 00:15:40.926 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:15:40.926 EAL: Detected shared linkage of DPDK 00:15:40.926 EAL: No shared files mode enabled, IPC will be disabled 00:15:41.185 EAL: Selected IOVA mode 'PA' 00:15:41.185 EAL: Probing VFIO support... 00:15:41.185 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:15:41.185 EAL: VFIO modules not loaded, skipping VFIO support... 00:15:41.185 EAL: Ask a virtual area of 0x2e000 bytes 00:15:41.185 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:15:41.185 EAL: Setting up physically contiguous memory... 00:15:41.185 EAL: Setting maximum number of open files to 524288 00:15:41.185 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:15:41.185 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:15:41.185 EAL: Ask a virtual area of 0x61000 bytes 00:15:41.185 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:15:41.185 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:15:41.185 EAL: Ask a virtual area of 0x400000000 bytes 00:15:41.185 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:15:41.185 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:15:41.185 EAL: Ask a virtual area of 0x61000 bytes 00:15:41.185 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:15:41.185 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:15:41.185 EAL: Ask a virtual area of 0x400000000 bytes 00:15:41.185 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:15:41.185 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:15:41.185 EAL: Ask a virtual area of 0x61000 bytes 00:15:41.185 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:15:41.185 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:15:41.185 EAL: Ask a virtual area of 0x400000000 bytes 00:15:41.185 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:15:41.185 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:15:41.185 EAL: Ask a virtual area of 0x61000 bytes 00:15:41.185 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:15:41.185 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:15:41.185 EAL: Ask a virtual area of 0x400000000 bytes 00:15:41.185 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:15:41.185 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:15:41.185 EAL: Hugepages will be freed exactly as allocated. 
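The memseg lists reserved above are backed by 2 MiB hugepages that must exist before EAL starts. A minimal sketch of reproducing that precondition by hand, assuming SPDK's stock scripts/setup.sh and its documented HUGEMEM knob (size in MB); the sysfs read mirrors what EAL itself consults:

  # Reserve ~2 GB of 2 MiB hugepages, then confirm the kernel accepted it.
  sudo HUGEMEM=2048 /home/vagrant/spdk_repo/spdk/scripts/setup.sh
  cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages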
00:15:41.185 EAL: No shared files mode enabled, IPC is disabled 00:15:41.185 EAL: No shared files mode enabled, IPC is disabled 00:15:41.185 EAL: TSC frequency is ~2100000 KHz 00:15:41.185 EAL: Main lcore 0 is ready (tid=7fea8c81fa40;cpuset=[0]) 00:15:41.185 EAL: Trying to obtain current memory policy. 00:15:41.185 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:41.185 EAL: Restoring previous memory policy: 0 00:15:41.185 EAL: request: mp_malloc_sync 00:15:41.185 EAL: No shared files mode enabled, IPC is disabled 00:15:41.185 EAL: Heap on socket 0 was expanded by 2MB 00:15:41.185 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:15:41.185 EAL: No PCI address specified using 'addr=' in: bus=pci 00:15:41.185 EAL: Mem event callback 'spdk:(nil)' registered 00:15:41.185 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:15:41.185 00:15:41.185 00:15:41.185 CUnit - A unit testing framework for C - Version 2.1-3 00:15:41.185 http://cunit.sourceforge.net/ 00:15:41.185 00:15:41.185 00:15:41.185 Suite: components_suite 00:15:41.752 Test: vtophys_malloc_test ...passed 00:15:41.752 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:15:41.752 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:41.752 EAL: Restoring previous memory policy: 4 00:15:41.752 EAL: Calling mem event callback 'spdk:(nil)' 00:15:41.752 EAL: request: mp_malloc_sync 00:15:41.752 EAL: No shared files mode enabled, IPC is disabled 00:15:41.752 EAL: Heap on socket 0 was expanded by 4MB 00:15:41.752 EAL: Calling mem event callback 'spdk:(nil)' 00:15:41.752 EAL: request: mp_malloc_sync 00:15:41.752 EAL: No shared files mode enabled, IPC is disabled 00:15:41.752 EAL: Heap on socket 0 was shrunk by 4MB 00:15:41.752 EAL: Trying to obtain current memory policy. 00:15:41.752 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:41.752 EAL: Restoring previous memory policy: 4 00:15:41.752 EAL: Calling mem event callback 'spdk:(nil)' 00:15:41.752 EAL: request: mp_malloc_sync 00:15:41.752 EAL: No shared files mode enabled, IPC is disabled 00:15:41.752 EAL: Heap on socket 0 was expanded by 6MB 00:15:41.752 EAL: Calling mem event callback 'spdk:(nil)' 00:15:41.752 EAL: request: mp_malloc_sync 00:15:41.752 EAL: No shared files mode enabled, IPC is disabled 00:15:41.752 EAL: Heap on socket 0 was shrunk by 6MB 00:15:41.752 EAL: Trying to obtain current memory policy. 00:15:41.752 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:41.752 EAL: Restoring previous memory policy: 4 00:15:41.752 EAL: Calling mem event callback 'spdk:(nil)' 00:15:41.752 EAL: request: mp_malloc_sync 00:15:41.752 EAL: No shared files mode enabled, IPC is disabled 00:15:41.752 EAL: Heap on socket 0 was expanded by 10MB 00:15:41.752 EAL: Calling mem event callback 'spdk:(nil)' 00:15:41.752 EAL: request: mp_malloc_sync 00:15:41.752 EAL: No shared files mode enabled, IPC is disabled 00:15:41.752 EAL: Heap on socket 0 was shrunk by 10MB 00:15:41.752 EAL: Trying to obtain current memory policy. 
00:15:41.752 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:41.752 EAL: Restoring previous memory policy: 4 00:15:41.752 EAL: Calling mem event callback 'spdk:(nil)' 00:15:41.752 EAL: request: mp_malloc_sync 00:15:41.752 EAL: No shared files mode enabled, IPC is disabled 00:15:41.752 EAL: Heap on socket 0 was expanded by 18MB 00:15:41.752 EAL: Calling mem event callback 'spdk:(nil)' 00:15:41.752 EAL: request: mp_malloc_sync 00:15:41.752 EAL: No shared files mode enabled, IPC is disabled 00:15:41.752 EAL: Heap on socket 0 was shrunk by 18MB 00:15:41.752 EAL: Trying to obtain current memory policy. 00:15:41.752 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:41.752 EAL: Restoring previous memory policy: 4 00:15:41.752 EAL: Calling mem event callback 'spdk:(nil)' 00:15:41.752 EAL: request: mp_malloc_sync 00:15:41.752 EAL: No shared files mode enabled, IPC is disabled 00:15:41.752 EAL: Heap on socket 0 was expanded by 34MB 00:15:42.010 EAL: Calling mem event callback 'spdk:(nil)' 00:15:42.010 EAL: request: mp_malloc_sync 00:15:42.010 EAL: No shared files mode enabled, IPC is disabled 00:15:42.010 EAL: Heap on socket 0 was shrunk by 34MB 00:15:42.010 EAL: Trying to obtain current memory policy. 00:15:42.010 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:42.010 EAL: Restoring previous memory policy: 4 00:15:42.010 EAL: Calling mem event callback 'spdk:(nil)' 00:15:42.010 EAL: request: mp_malloc_sync 00:15:42.010 EAL: No shared files mode enabled, IPC is disabled 00:15:42.010 EAL: Heap on socket 0 was expanded by 66MB 00:15:42.269 EAL: Calling mem event callback 'spdk:(nil)' 00:15:42.269 EAL: request: mp_malloc_sync 00:15:42.269 EAL: No shared files mode enabled, IPC is disabled 00:15:42.269 EAL: Heap on socket 0 was shrunk by 66MB 00:15:42.269 EAL: Trying to obtain current memory policy. 00:15:42.269 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:42.269 EAL: Restoring previous memory policy: 4 00:15:42.269 EAL: Calling mem event callback 'spdk:(nil)' 00:15:42.269 EAL: request: mp_malloc_sync 00:15:42.269 EAL: No shared files mode enabled, IPC is disabled 00:15:42.269 EAL: Heap on socket 0 was expanded by 130MB 00:15:42.835 EAL: Calling mem event callback 'spdk:(nil)' 00:15:42.835 EAL: request: mp_malloc_sync 00:15:42.835 EAL: No shared files mode enabled, IPC is disabled 00:15:42.835 EAL: Heap on socket 0 was shrunk by 130MB 00:15:42.835 EAL: Trying to obtain current memory policy. 00:15:42.835 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:42.835 EAL: Restoring previous memory policy: 4 00:15:42.835 EAL: Calling mem event callback 'spdk:(nil)' 00:15:42.835 EAL: request: mp_malloc_sync 00:15:42.835 EAL: No shared files mode enabled, IPC is disabled 00:15:42.835 EAL: Heap on socket 0 was expanded by 258MB 00:15:43.770 EAL: Calling mem event callback 'spdk:(nil)' 00:15:43.770 EAL: request: mp_malloc_sync 00:15:43.770 EAL: No shared files mode enabled, IPC is disabled 00:15:43.770 EAL: Heap on socket 0 was shrunk by 258MB 00:15:44.028 EAL: Trying to obtain current memory policy. 
00:15:44.028 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:44.028 EAL: Restoring previous memory policy: 4 00:15:44.028 EAL: Calling mem event callback 'spdk:(nil)' 00:15:44.028 EAL: request: mp_malloc_sync 00:15:44.028 EAL: No shared files mode enabled, IPC is disabled 00:15:44.028 EAL: Heap on socket 0 was expanded by 514MB 00:15:45.403 EAL: Calling mem event callback 'spdk:(nil)' 00:15:45.403 EAL: request: mp_malloc_sync 00:15:45.403 EAL: No shared files mode enabled, IPC is disabled 00:15:45.403 EAL: Heap on socket 0 was shrunk by 514MB 00:15:46.340 EAL: Trying to obtain current memory policy. 00:15:46.340 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:46.340 EAL: Restoring previous memory policy: 4 00:15:46.340 EAL: Calling mem event callback 'spdk:(nil)' 00:15:46.340 EAL: request: mp_malloc_sync 00:15:46.340 EAL: No shared files mode enabled, IPC is disabled 00:15:46.340 EAL: Heap on socket 0 was expanded by 1026MB 00:15:48.873 EAL: Calling mem event callback 'spdk:(nil)' 00:15:49.132 EAL: request: mp_malloc_sync 00:15:49.132 EAL: No shared files mode enabled, IPC is disabled 00:15:49.132 EAL: Heap on socket 0 was shrunk by 1026MB 00:15:51.033 passed 00:15:51.033 00:15:51.033 Run Summary: Type Total Ran Passed Failed Inactive 00:15:51.033 suites 1 1 n/a 0 0 00:15:51.033 tests 2 2 2 0 0 00:15:51.033 asserts 6496 6496 6496 0 n/a 00:15:51.033 00:15:51.033 Elapsed time = 9.419 seconds 00:15:51.033 EAL: Calling mem event callback 'spdk:(nil)' 00:15:51.033 EAL: request: mp_malloc_sync 00:15:51.033 EAL: No shared files mode enabled, IPC is disabled 00:15:51.033 EAL: Heap on socket 0 was shrunk by 2MB 00:15:51.033 EAL: No shared files mode enabled, IPC is disabled 00:15:51.033 EAL: No shared files mode enabled, IPC is disabled 00:15:51.033 EAL: No shared files mode enabled, IPC is disabled 00:15:51.033 00:15:51.033 real 0m9.819s 00:15:51.033 user 0m8.704s 00:15:51.033 sys 0m0.917s 00:15:51.033 14:34:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:51.033 14:34:59 -- common/autotest_common.sh@10 -- # set +x 00:15:51.033 ************************************ 00:15:51.033 END TEST env_vtophys 00:15:51.033 ************************************ 00:15:51.033 14:34:59 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:15:51.033 14:34:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:51.033 14:34:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:51.033 14:34:59 -- common/autotest_common.sh@10 -- # set +x 00:15:51.033 ************************************ 00:15:51.033 START TEST env_pci 00:15:51.033 ************************************ 00:15:51.033 14:34:59 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:15:51.033 00:15:51.033 00:15:51.033 CUnit - A unit testing framework for C - Version 2.1-3 00:15:51.033 http://cunit.sourceforge.net/ 00:15:51.033 00:15:51.033 00:15:51.033 Suite: pci 00:15:51.033 Test: pci_hook ...[2024-04-17 14:34:59.414071] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 61817 has claimed it 00:15:51.033 passed 00:15:51.033 00:15:51.033 Run Summary: Type Total Ran Passed Failed Inactive 00:15:51.033 suites 1 1 n/a 0 0 00:15:51.033 tests 1 1 1 0 0 00:15:51.033 asserts 25 25 25 0 n/a 00:15:51.033 00:15:51.033 Elapsed time = 0.010 seconds 00:15:51.033 EAL: Cannot find device (10000:00:01.0) 00:15:51.033 EAL: Failed to attach device 
on primary process 00:15:51.033 00:15:51.033 real 0m0.092s 00:15:51.033 user 0m0.041s 00:15:51.033 sys 0m0.050s 00:15:51.033 14:34:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:51.033 14:34:59 -- common/autotest_common.sh@10 -- # set +x 00:15:51.033 ************************************ 00:15:51.033 END TEST env_pci 00:15:51.033 ************************************ 00:15:51.033 14:34:59 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:15:51.033 14:34:59 -- env/env.sh@15 -- # uname 00:15:51.033 14:34:59 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:15:51.033 14:34:59 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:15:51.033 14:34:59 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:15:51.033 14:34:59 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:15:51.033 14:34:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:51.033 14:34:59 -- common/autotest_common.sh@10 -- # set +x 00:15:51.033 ************************************ 00:15:51.033 START TEST env_dpdk_post_init 00:15:51.033 ************************************ 00:15:51.033 14:34:59 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:15:51.291 EAL: Detected CPU lcores: 10 00:15:51.291 EAL: Detected NUMA nodes: 1 00:15:51.291 EAL: Detected shared linkage of DPDK 00:15:51.291 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:15:51.291 EAL: Selected IOVA mode 'PA' 00:15:51.291 TELEMETRY: No legacy callbacks, legacy socket not created 00:15:51.291 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:15:51.291 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:15:51.291 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:15:51.291 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:15:51.558 Starting DPDK initialization... 00:15:51.558 Starting SPDK post initialization... 00:15:51.558 SPDK NVMe probe 00:15:51.558 Attaching to 0000:00:10.0 00:15:51.559 Attaching to 0000:00:11.0 00:15:51.559 Attaching to 0000:00:12.0 00:15:51.559 Attaching to 0000:00:13.0 00:15:51.559 Attached to 0000:00:10.0 00:15:51.559 Attached to 0000:00:11.0 00:15:51.559 Attached to 0000:00:13.0 00:15:51.559 Attached to 0000:00:12.0 00:15:51.559 Cleaning up... 
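The probe sequence above can be replayed outside autotest; the binary path and flags are copied from the invocation in this log, and the only assumptions are root privileges and a prior setup.sh pass to rebind the four QEMU NVMe controllers to a userspace driver:

  # Rebind 0000:00:10.0 through 0000:00:13.0 away from the kernel nvme driver.
  sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh
  # Same DPDK post-init probe autotest just ran (core 0, fixed base vaddr).
  sudo /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
      -c 0x1 --base-virtaddr=0x200000000000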
00:15:51.559 00:15:51.559 real 0m0.371s 00:15:51.559 user 0m0.138s 00:15:51.559 sys 0m0.133s 00:15:51.559 14:34:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:51.559 14:34:59 -- common/autotest_common.sh@10 -- # set +x 00:15:51.559 ************************************ 00:15:51.559 END TEST env_dpdk_post_init 00:15:51.559 ************************************ 00:15:51.559 14:35:00 -- env/env.sh@26 -- # uname 00:15:51.559 14:35:00 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:15:51.559 14:35:00 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:15:51.559 14:35:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:51.559 14:35:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:51.559 14:35:00 -- common/autotest_common.sh@10 -- # set +x 00:15:51.559 ************************************ 00:15:51.559 START TEST env_mem_callbacks 00:15:51.559 ************************************ 00:15:51.559 14:35:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:15:51.821 EAL: Detected CPU lcores: 10 00:15:51.821 EAL: Detected NUMA nodes: 1 00:15:51.821 EAL: Detected shared linkage of DPDK 00:15:51.821 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:15:51.821 EAL: Selected IOVA mode 'PA' 00:15:51.821 TELEMETRY: No legacy callbacks, legacy socket not created 00:15:51.821 00:15:51.821 00:15:51.821 CUnit - A unit testing framework for C - Version 2.1-3 00:15:51.821 http://cunit.sourceforge.net/ 00:15:51.821 00:15:51.821 00:15:51.821 Suite: memory 00:15:51.821 Test: test ... 00:15:51.821 register 0x200000200000 2097152 00:15:51.821 malloc 3145728 00:15:51.821 register 0x200000400000 4194304 00:15:51.821 buf 0x2000004fffc0 len 3145728 PASSED 00:15:51.821 malloc 64 00:15:51.821 buf 0x2000004ffec0 len 64 PASSED 00:15:51.821 malloc 4194304 00:15:51.821 register 0x200000800000 6291456 00:15:51.821 buf 0x2000009fffc0 len 4194304 PASSED 00:15:51.821 free 0x2000004fffc0 3145728 00:15:51.821 free 0x2000004ffec0 64 00:15:51.821 unregister 0x200000400000 4194304 PASSED 00:15:51.821 free 0x2000009fffc0 4194304 00:15:51.821 unregister 0x200000800000 6291456 PASSED 00:15:51.821 malloc 8388608 00:15:51.821 register 0x200000400000 10485760 00:15:51.821 buf 0x2000005fffc0 len 8388608 PASSED 00:15:51.821 free 0x2000005fffc0 8388608 00:15:51.821 unregister 0x200000400000 10485760 PASSED 00:15:52.079 passed 00:15:52.079 00:15:52.079 Run Summary: Type Total Ran Passed Failed Inactive 00:15:52.079 suites 1 1 n/a 0 0 00:15:52.079 tests 1 1 1 0 0 00:15:52.080 asserts 15 15 15 0 n/a 00:15:52.080 00:15:52.080 Elapsed time = 0.110 seconds 00:15:52.080 00:15:52.080 real 0m0.339s 00:15:52.080 user 0m0.152s 00:15:52.080 sys 0m0.082s 00:15:52.080 14:35:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:52.080 ************************************ 00:15:52.080 END TEST env_mem_callbacks 00:15:52.080 14:35:00 -- common/autotest_common.sh@10 -- # set +x 00:15:52.080 ************************************ 00:15:52.080 00:15:52.080 real 0m11.798s 00:15:52.080 user 0m9.646s 00:15:52.080 sys 0m1.660s 00:15:52.080 14:35:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:52.080 14:35:00 -- common/autotest_common.sh@10 -- # set +x 00:15:52.080 ************************************ 00:15:52.080 END TEST env 00:15:52.080 ************************************ 00:15:52.080 14:35:00 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
00:15:52.080 14:35:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:52.080 14:35:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:52.080 14:35:00 -- common/autotest_common.sh@10 -- # set +x 00:15:52.080 ************************************ 00:15:52.080 START TEST rpc 00:15:52.080 ************************************ 00:15:52.080 14:35:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:15:52.338 * Looking for test storage... 00:15:52.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:15:52.338 14:35:00 -- rpc/rpc.sh@65 -- # spdk_pid=61953 00:15:52.338 14:35:00 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:52.338 14:35:00 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:15:52.338 14:35:00 -- rpc/rpc.sh@67 -- # waitforlisten 61953 00:15:52.338 14:35:00 -- common/autotest_common.sh@817 -- # '[' -z 61953 ']' 00:15:52.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.338 14:35:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.338 14:35:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:52.338 14:35:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.338 14:35:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:52.338 14:35:00 -- common/autotest_common.sh@10 -- # set +x 00:15:52.338 [2024-04-17 14:35:00.895544] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:15:52.338 [2024-04-17 14:35:00.895709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61953 ] 00:15:52.595 [2024-04-17 14:35:01.083294] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.854 [2024-04-17 14:35:01.392481] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:15:52.854 [2024-04-17 14:35:01.392796] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 61953' to capture a snapshot of events at runtime. 00:15:52.854 [2024-04-17 14:35:01.392928] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:52.854 [2024-04-17 14:35:01.392993] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:52.854 [2024-04-17 14:35:01.393028] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid61953 for offline analysis/debug. 
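Following the startup notice above, a sketch of how that trace capture would look in practice; the -s/-p form is quoted from the notice itself, while the offline -f replay of the shm copy is an assumption based on the spdk_trace tool shipped in build/bin:

  # Attach to the live target's trace shm by app name and pid.
  sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s spdk_tgt -p 61953
  # Or parse the copied shm file after the target has exited.
  sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -f /dev/shm/spdk_tgt_trace.pid61953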
00:15:52.854 [2024-04-17 14:35:01.393123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.231 14:35:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:54.231 14:35:02 -- common/autotest_common.sh@850 -- # return 0 00:15:54.231 14:35:02 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:15:54.231 14:35:02 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:15:54.231 14:35:02 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:15:54.231 14:35:02 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:15:54.231 14:35:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:54.231 14:35:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:54.231 14:35:02 -- common/autotest_common.sh@10 -- # set +x 00:15:54.231 ************************************ 00:15:54.231 START TEST rpc_integrity 00:15:54.231 ************************************ 00:15:54.231 14:35:02 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:15:54.231 14:35:02 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:54.231 14:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:54.231 14:35:02 -- common/autotest_common.sh@10 -- # set +x 00:15:54.231 14:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:54.231 14:35:02 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:15:54.231 14:35:02 -- rpc/rpc.sh@13 -- # jq length 00:15:54.231 14:35:02 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:15:54.231 14:35:02 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:15:54.231 14:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:54.231 14:35:02 -- common/autotest_common.sh@10 -- # set +x 00:15:54.231 14:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:54.231 14:35:02 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:15:54.231 14:35:02 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:15:54.231 14:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:54.231 14:35:02 -- common/autotest_common.sh@10 -- # set +x 00:15:54.231 14:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:54.231 14:35:02 -- rpc/rpc.sh@16 -- # bdevs='[ 00:15:54.231 { 00:15:54.231 "name": "Malloc0", 00:15:54.231 "aliases": [ 00:15:54.231 "b81d0ad1-2f5a-4b96-b172-af0e35ac5723" 00:15:54.231 ], 00:15:54.231 "product_name": "Malloc disk", 00:15:54.231 "block_size": 512, 00:15:54.231 "num_blocks": 16384, 00:15:54.231 "uuid": "b81d0ad1-2f5a-4b96-b172-af0e35ac5723", 00:15:54.231 "assigned_rate_limits": { 00:15:54.231 "rw_ios_per_sec": 0, 00:15:54.231 "rw_mbytes_per_sec": 0, 00:15:54.231 "r_mbytes_per_sec": 0, 00:15:54.231 "w_mbytes_per_sec": 0 00:15:54.231 }, 00:15:54.231 "claimed": false, 00:15:54.231 "zoned": false, 00:15:54.231 "supported_io_types": { 00:15:54.231 "read": true, 00:15:54.231 "write": true, 00:15:54.231 "unmap": true, 00:15:54.231 "write_zeroes": true, 00:15:54.231 "flush": true, 00:15:54.231 "reset": true, 00:15:54.231 "compare": false, 00:15:54.231 "compare_and_write": false, 00:15:54.231 "abort": true, 00:15:54.231 "nvme_admin": false, 00:15:54.231 "nvme_io": false 00:15:54.231 }, 00:15:54.231 "memory_domains": [ 00:15:54.231 { 00:15:54.231 "dma_device_id": "system", 00:15:54.231 "dma_device_type": 1 
00:15:54.231 }, 00:15:54.231 { 00:15:54.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.231 "dma_device_type": 2 00:15:54.231 } 00:15:54.231 ], 00:15:54.231 "driver_specific": {} 00:15:54.231 } 00:15:54.231 ]' 00:15:54.231 14:35:02 -- rpc/rpc.sh@17 -- # jq length 00:15:54.231 14:35:02 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:15:54.231 14:35:02 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:15:54.231 14:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:54.231 14:35:02 -- common/autotest_common.sh@10 -- # set +x 00:15:54.231 [2024-04-17 14:35:02.675062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:15:54.231 [2024-04-17 14:35:02.675242] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.231 [2024-04-17 14:35:02.675306] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:15:54.231 [2024-04-17 14:35:02.675413] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.231 [2024-04-17 14:35:02.678095] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.231 [2024-04-17 14:35:02.678248] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:15:54.231 Passthru0 00:15:54.231 14:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:54.231 14:35:02 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:15:54.231 14:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:54.231 14:35:02 -- common/autotest_common.sh@10 -- # set +x 00:15:54.231 14:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:54.231 14:35:02 -- rpc/rpc.sh@20 -- # bdevs='[ 00:15:54.231 { 00:15:54.231 "name": "Malloc0", 00:15:54.231 "aliases": [ 00:15:54.231 "b81d0ad1-2f5a-4b96-b172-af0e35ac5723" 00:15:54.231 ], 00:15:54.231 "product_name": "Malloc disk", 00:15:54.231 "block_size": 512, 00:15:54.231 "num_blocks": 16384, 00:15:54.231 "uuid": "b81d0ad1-2f5a-4b96-b172-af0e35ac5723", 00:15:54.231 "assigned_rate_limits": { 00:15:54.231 "rw_ios_per_sec": 0, 00:15:54.231 "rw_mbytes_per_sec": 0, 00:15:54.231 "r_mbytes_per_sec": 0, 00:15:54.231 "w_mbytes_per_sec": 0 00:15:54.231 }, 00:15:54.231 "claimed": true, 00:15:54.231 "claim_type": "exclusive_write", 00:15:54.231 "zoned": false, 00:15:54.231 "supported_io_types": { 00:15:54.231 "read": true, 00:15:54.231 "write": true, 00:15:54.231 "unmap": true, 00:15:54.231 "write_zeroes": true, 00:15:54.231 "flush": true, 00:15:54.231 "reset": true, 00:15:54.231 "compare": false, 00:15:54.231 "compare_and_write": false, 00:15:54.231 "abort": true, 00:15:54.231 "nvme_admin": false, 00:15:54.231 "nvme_io": false 00:15:54.231 }, 00:15:54.231 "memory_domains": [ 00:15:54.231 { 00:15:54.231 "dma_device_id": "system", 00:15:54.231 "dma_device_type": 1 00:15:54.231 }, 00:15:54.231 { 00:15:54.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.231 "dma_device_type": 2 00:15:54.231 } 00:15:54.231 ], 00:15:54.231 "driver_specific": {} 00:15:54.231 }, 00:15:54.232 { 00:15:54.232 "name": "Passthru0", 00:15:54.232 "aliases": [ 00:15:54.232 "0fe910a3-b0e3-52e8-bce5-d462147b97da" 00:15:54.232 ], 00:15:54.232 "product_name": "passthru", 00:15:54.232 "block_size": 512, 00:15:54.232 "num_blocks": 16384, 00:15:54.232 "uuid": "0fe910a3-b0e3-52e8-bce5-d462147b97da", 00:15:54.232 "assigned_rate_limits": { 00:15:54.232 "rw_ios_per_sec": 0, 00:15:54.232 "rw_mbytes_per_sec": 0, 00:15:54.232 "r_mbytes_per_sec": 0, 00:15:54.232 "w_mbytes_per_sec": 0 
00:15:54.232 }, 00:15:54.232 "claimed": false, 00:15:54.232 "zoned": false, 00:15:54.232 "supported_io_types": { 00:15:54.232 "read": true, 00:15:54.232 "write": true, 00:15:54.232 "unmap": true, 00:15:54.232 "write_zeroes": true, 00:15:54.232 "flush": true, 00:15:54.232 "reset": true, 00:15:54.232 "compare": false, 00:15:54.232 "compare_and_write": false, 00:15:54.232 "abort": true, 00:15:54.232 "nvme_admin": false, 00:15:54.232 "nvme_io": false 00:15:54.232 }, 00:15:54.232 "memory_domains": [ 00:15:54.232 { 00:15:54.232 "dma_device_id": "system", 00:15:54.232 "dma_device_type": 1 00:15:54.232 }, 00:15:54.232 { 00:15:54.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.232 "dma_device_type": 2 00:15:54.232 } 00:15:54.232 ], 00:15:54.232 "driver_specific": { 00:15:54.232 "passthru": { 00:15:54.232 "name": "Passthru0", 00:15:54.232 "base_bdev_name": "Malloc0" 00:15:54.232 } 00:15:54.232 } 00:15:54.232 } 00:15:54.232 ]' 00:15:54.232 14:35:02 -- rpc/rpc.sh@21 -- # jq length 00:15:54.232 14:35:02 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:15:54.232 14:35:02 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:15:54.232 14:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:54.232 14:35:02 -- common/autotest_common.sh@10 -- # set +x 00:15:54.232 14:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:54.232 14:35:02 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:54.232 14:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:54.232 14:35:02 -- common/autotest_common.sh@10 -- # set +x 00:15:54.232 14:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:54.232 14:35:02 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:54.232 14:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:54.232 14:35:02 -- common/autotest_common.sh@10 -- # set +x 00:15:54.232 14:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:54.232 14:35:02 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:15:54.232 14:35:02 -- rpc/rpc.sh@26 -- # jq length 00:15:54.491 14:35:02 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:15:54.491 00:15:54.491 real 0m0.347s 00:15:54.491 user 0m0.167s 00:15:54.491 sys 0m0.059s 00:15:54.491 14:35:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:54.491 ************************************ 00:15:54.491 14:35:02 -- common/autotest_common.sh@10 -- # set +x 00:15:54.491 END TEST rpc_integrity 00:15:54.491 ************************************ 00:15:54.491 14:35:02 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:15:54.491 14:35:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:54.491 14:35:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:54.491 14:35:02 -- common/autotest_common.sh@10 -- # set +x 00:15:54.491 ************************************ 00:15:54.491 START TEST rpc_plugins 00:15:54.491 ************************************ 00:15:54.491 14:35:03 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:15:54.491 14:35:03 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:15:54.491 14:35:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:54.491 14:35:03 -- common/autotest_common.sh@10 -- # set +x 00:15:54.491 14:35:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:54.491 14:35:03 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:15:54.491 14:35:03 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:15:54.491 14:35:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:54.491 14:35:03 -- common/autotest_common.sh@10 -- # set +x 00:15:54.491 14:35:03 
-- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:54.491 14:35:03 -- rpc/rpc.sh@31 -- # bdevs='[ 00:15:54.491 { 00:15:54.491 "name": "Malloc1", 00:15:54.491 "aliases": [ 00:15:54.491 "9169f879-8842-402e-8dae-dd57633d894d" 00:15:54.491 ], 00:15:54.491 "product_name": "Malloc disk", 00:15:54.491 "block_size": 4096, 00:15:54.491 "num_blocks": 256, 00:15:54.491 "uuid": "9169f879-8842-402e-8dae-dd57633d894d", 00:15:54.491 "assigned_rate_limits": { 00:15:54.491 "rw_ios_per_sec": 0, 00:15:54.491 "rw_mbytes_per_sec": 0, 00:15:54.491 "r_mbytes_per_sec": 0, 00:15:54.491 "w_mbytes_per_sec": 0 00:15:54.491 }, 00:15:54.491 "claimed": false, 00:15:54.491 "zoned": false, 00:15:54.491 "supported_io_types": { 00:15:54.491 "read": true, 00:15:54.491 "write": true, 00:15:54.491 "unmap": true, 00:15:54.491 "write_zeroes": true, 00:15:54.491 "flush": true, 00:15:54.491 "reset": true, 00:15:54.491 "compare": false, 00:15:54.491 "compare_and_write": false, 00:15:54.491 "abort": true, 00:15:54.491 "nvme_admin": false, 00:15:54.491 "nvme_io": false 00:15:54.491 }, 00:15:54.491 "memory_domains": [ 00:15:54.491 { 00:15:54.491 "dma_device_id": "system", 00:15:54.491 "dma_device_type": 1 00:15:54.491 }, 00:15:54.491 { 00:15:54.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.491 "dma_device_type": 2 00:15:54.491 } 00:15:54.491 ], 00:15:54.491 "driver_specific": {} 00:15:54.491 } 00:15:54.491 ]' 00:15:54.491 14:35:03 -- rpc/rpc.sh@32 -- # jq length 00:15:54.491 14:35:03 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:15:54.491 14:35:03 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:15:54.491 14:35:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:54.491 14:35:03 -- common/autotest_common.sh@10 -- # set +x 00:15:54.749 14:35:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:54.749 14:35:03 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:15:54.749 14:35:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:54.749 14:35:03 -- common/autotest_common.sh@10 -- # set +x 00:15:54.749 14:35:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:54.749 14:35:03 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:15:54.749 14:35:03 -- rpc/rpc.sh@36 -- # jq length 00:15:54.749 14:35:03 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:15:54.749 00:15:54.749 real 0m0.155s 00:15:54.749 user 0m0.086s 00:15:54.749 sys 0m0.025s 00:15:54.749 14:35:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:54.749 14:35:03 -- common/autotest_common.sh@10 -- # set +x 00:15:54.749 ************************************ 00:15:54.749 END TEST rpc_plugins 00:15:54.750 ************************************ 00:15:54.750 14:35:03 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:15:54.750 14:35:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:54.750 14:35:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:54.750 14:35:03 -- common/autotest_common.sh@10 -- # set +x 00:15:54.750 ************************************ 00:15:54.750 START TEST rpc_trace_cmd_test 00:15:54.750 ************************************ 00:15:54.750 14:35:03 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:15:54.750 14:35:03 -- rpc/rpc.sh@40 -- # local info 00:15:54.750 14:35:03 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:15:54.750 14:35:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:54.750 14:35:03 -- common/autotest_common.sh@10 -- # set +x 00:15:54.750 14:35:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:54.750 14:35:03 -- rpc/rpc.sh@42 -- # 
info='{ 00:15:54.750 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid61953", 00:15:54.750 "tpoint_group_mask": "0x8", 00:15:54.750 "iscsi_conn": { 00:15:54.750 "mask": "0x2", 00:15:54.750 "tpoint_mask": "0x0" 00:15:54.750 }, 00:15:54.750 "scsi": { 00:15:54.750 "mask": "0x4", 00:15:54.750 "tpoint_mask": "0x0" 00:15:54.750 }, 00:15:54.750 "bdev": { 00:15:54.750 "mask": "0x8", 00:15:54.750 "tpoint_mask": "0xffffffffffffffff" 00:15:54.750 }, 00:15:54.750 "nvmf_rdma": { 00:15:54.750 "mask": "0x10", 00:15:54.750 "tpoint_mask": "0x0" 00:15:54.750 }, 00:15:54.750 "nvmf_tcp": { 00:15:54.750 "mask": "0x20", 00:15:54.750 "tpoint_mask": "0x0" 00:15:54.750 }, 00:15:54.750 "ftl": { 00:15:54.750 "mask": "0x40", 00:15:54.750 "tpoint_mask": "0x0" 00:15:54.750 }, 00:15:54.750 "blobfs": { 00:15:54.750 "mask": "0x80", 00:15:54.750 "tpoint_mask": "0x0" 00:15:54.750 }, 00:15:54.750 "dsa": { 00:15:54.750 "mask": "0x200", 00:15:54.750 "tpoint_mask": "0x0" 00:15:54.750 }, 00:15:54.750 "thread": { 00:15:54.750 "mask": "0x400", 00:15:54.750 "tpoint_mask": "0x0" 00:15:54.750 }, 00:15:54.750 "nvme_pcie": { 00:15:54.750 "mask": "0x800", 00:15:54.750 "tpoint_mask": "0x0" 00:15:54.750 }, 00:15:54.750 "iaa": { 00:15:54.750 "mask": "0x1000", 00:15:54.750 "tpoint_mask": "0x0" 00:15:54.750 }, 00:15:54.750 "nvme_tcp": { 00:15:54.750 "mask": "0x2000", 00:15:54.750 "tpoint_mask": "0x0" 00:15:54.750 }, 00:15:54.750 "bdev_nvme": { 00:15:54.750 "mask": "0x4000", 00:15:54.750 "tpoint_mask": "0x0" 00:15:54.750 }, 00:15:54.750 "sock": { 00:15:54.750 "mask": "0x8000", 00:15:54.750 "tpoint_mask": "0x0" 00:15:54.750 } 00:15:54.750 }' 00:15:54.750 14:35:03 -- rpc/rpc.sh@43 -- # jq length 00:15:55.008 14:35:03 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:15:55.008 14:35:03 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:15:55.008 14:35:03 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:15:55.008 14:35:03 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:15:55.008 14:35:03 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:15:55.008 14:35:03 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:15:55.008 14:35:03 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:15:55.008 14:35:03 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:15:55.008 14:35:03 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:15:55.008 00:15:55.008 real 0m0.241s 00:15:55.008 user 0m0.195s 00:15:55.008 sys 0m0.039s 00:15:55.008 14:35:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:55.008 14:35:03 -- common/autotest_common.sh@10 -- # set +x 00:15:55.008 ************************************ 00:15:55.008 END TEST rpc_trace_cmd_test 00:15:55.008 ************************************ 00:15:55.008 14:35:03 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:15:55.008 14:35:03 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:15:55.008 14:35:03 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:15:55.008 14:35:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:55.008 14:35:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:55.008 14:35:03 -- common/autotest_common.sh@10 -- # set +x 00:15:55.267 ************************************ 00:15:55.267 START TEST rpc_daemon_integrity 00:15:55.267 ************************************ 00:15:55.267 14:35:03 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:15:55.267 14:35:03 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:55.267 14:35:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:55.267 14:35:03 -- common/autotest_common.sh@10 -- # set +x 00:15:55.267 14:35:03 -- common/autotest_common.sh@577 
-- # [[ 0 == 0 ]] 00:15:55.267 14:35:03 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:15:55.267 14:35:03 -- rpc/rpc.sh@13 -- # jq length 00:15:55.267 14:35:03 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:15:55.267 14:35:03 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:15:55.267 14:35:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:55.267 14:35:03 -- common/autotest_common.sh@10 -- # set +x 00:15:55.267 14:35:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:55.267 14:35:03 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:15:55.267 14:35:03 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:15:55.267 14:35:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:55.267 14:35:03 -- common/autotest_common.sh@10 -- # set +x 00:15:55.267 14:35:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:55.267 14:35:03 -- rpc/rpc.sh@16 -- # bdevs='[ 00:15:55.267 { 00:15:55.267 "name": "Malloc2", 00:15:55.267 "aliases": [ 00:15:55.267 "e12a9790-cca8-4932-969b-293e84126bcb" 00:15:55.267 ], 00:15:55.267 "product_name": "Malloc disk", 00:15:55.267 "block_size": 512, 00:15:55.267 "num_blocks": 16384, 00:15:55.267 "uuid": "e12a9790-cca8-4932-969b-293e84126bcb", 00:15:55.267 "assigned_rate_limits": { 00:15:55.267 "rw_ios_per_sec": 0, 00:15:55.267 "rw_mbytes_per_sec": 0, 00:15:55.267 "r_mbytes_per_sec": 0, 00:15:55.267 "w_mbytes_per_sec": 0 00:15:55.267 }, 00:15:55.267 "claimed": false, 00:15:55.267 "zoned": false, 00:15:55.267 "supported_io_types": { 00:15:55.267 "read": true, 00:15:55.267 "write": true, 00:15:55.267 "unmap": true, 00:15:55.267 "write_zeroes": true, 00:15:55.267 "flush": true, 00:15:55.267 "reset": true, 00:15:55.267 "compare": false, 00:15:55.267 "compare_and_write": false, 00:15:55.267 "abort": true, 00:15:55.267 "nvme_admin": false, 00:15:55.267 "nvme_io": false 00:15:55.267 }, 00:15:55.267 "memory_domains": [ 00:15:55.267 { 00:15:55.267 "dma_device_id": "system", 00:15:55.267 "dma_device_type": 1 00:15:55.267 }, 00:15:55.267 { 00:15:55.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.267 "dma_device_type": 2 00:15:55.267 } 00:15:55.267 ], 00:15:55.267 "driver_specific": {} 00:15:55.267 } 00:15:55.267 ]' 00:15:55.267 14:35:03 -- rpc/rpc.sh@17 -- # jq length 00:15:55.267 14:35:03 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:15:55.267 14:35:03 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:15:55.267 14:35:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:55.267 14:35:03 -- common/autotest_common.sh@10 -- # set +x 00:15:55.267 [2024-04-17 14:35:03.827074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:15:55.267 [2024-04-17 14:35:03.827274] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.267 [2024-04-17 14:35:03.827338] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:55.267 [2024-04-17 14:35:03.827448] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.267 [2024-04-17 14:35:03.830007] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.267 [2024-04-17 14:35:03.830163] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:15:55.267 Passthru0 00:15:55.267 14:35:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:55.267 14:35:03 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:15:55.267 14:35:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:55.267 14:35:03 -- common/autotest_common.sh@10 -- # set +x 
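The rpc_cmd calls traced here go through scripts/rpc.py against the default /var/tmp/spdk.sock; the Malloc2/Passthru0 stack under test can be built and torn down the same way by hand (the -b name on create is an illustrative addition, the test let the target auto-name its malloc bdev):

  # Build the two-layer stack the daemon-integrity test exercises.
  ./scripts/rpc.py bdev_malloc_create 8 512 -b Malloc2
  ./scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs
  # Tear down in reverse order, passthru first.
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc2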
00:15:55.267 14:35:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:55.267 14:35:03 -- rpc/rpc.sh@20 -- # bdevs='[ 00:15:55.267 { 00:15:55.267 "name": "Malloc2", 00:15:55.267 "aliases": [ 00:15:55.267 "e12a9790-cca8-4932-969b-293e84126bcb" 00:15:55.267 ], 00:15:55.267 "product_name": "Malloc disk", 00:15:55.267 "block_size": 512, 00:15:55.267 "num_blocks": 16384, 00:15:55.267 "uuid": "e12a9790-cca8-4932-969b-293e84126bcb", 00:15:55.267 "assigned_rate_limits": { 00:15:55.267 "rw_ios_per_sec": 0, 00:15:55.267 "rw_mbytes_per_sec": 0, 00:15:55.267 "r_mbytes_per_sec": 0, 00:15:55.267 "w_mbytes_per_sec": 0 00:15:55.267 }, 00:15:55.267 "claimed": true, 00:15:55.267 "claim_type": "exclusive_write", 00:15:55.267 "zoned": false, 00:15:55.267 "supported_io_types": { 00:15:55.267 "read": true, 00:15:55.267 "write": true, 00:15:55.267 "unmap": true, 00:15:55.267 "write_zeroes": true, 00:15:55.267 "flush": true, 00:15:55.267 "reset": true, 00:15:55.267 "compare": false, 00:15:55.267 "compare_and_write": false, 00:15:55.267 "abort": true, 00:15:55.267 "nvme_admin": false, 00:15:55.267 "nvme_io": false 00:15:55.267 }, 00:15:55.267 "memory_domains": [ 00:15:55.267 { 00:15:55.267 "dma_device_id": "system", 00:15:55.267 "dma_device_type": 1 00:15:55.267 }, 00:15:55.267 { 00:15:55.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.267 "dma_device_type": 2 00:15:55.267 } 00:15:55.267 ], 00:15:55.267 "driver_specific": {} 00:15:55.267 }, 00:15:55.267 { 00:15:55.267 "name": "Passthru0", 00:15:55.267 "aliases": [ 00:15:55.267 "74eb0631-4e2d-51f7-918b-e91504145297" 00:15:55.267 ], 00:15:55.267 "product_name": "passthru", 00:15:55.267 "block_size": 512, 00:15:55.267 "num_blocks": 16384, 00:15:55.267 "uuid": "74eb0631-4e2d-51f7-918b-e91504145297", 00:15:55.267 "assigned_rate_limits": { 00:15:55.267 "rw_ios_per_sec": 0, 00:15:55.267 "rw_mbytes_per_sec": 0, 00:15:55.267 "r_mbytes_per_sec": 0, 00:15:55.267 "w_mbytes_per_sec": 0 00:15:55.267 }, 00:15:55.267 "claimed": false, 00:15:55.267 "zoned": false, 00:15:55.267 "supported_io_types": { 00:15:55.267 "read": true, 00:15:55.267 "write": true, 00:15:55.267 "unmap": true, 00:15:55.267 "write_zeroes": true, 00:15:55.267 "flush": true, 00:15:55.267 "reset": true, 00:15:55.267 "compare": false, 00:15:55.267 "compare_and_write": false, 00:15:55.267 "abort": true, 00:15:55.267 "nvme_admin": false, 00:15:55.267 "nvme_io": false 00:15:55.267 }, 00:15:55.267 "memory_domains": [ 00:15:55.267 { 00:15:55.267 "dma_device_id": "system", 00:15:55.267 "dma_device_type": 1 00:15:55.267 }, 00:15:55.267 { 00:15:55.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.267 "dma_device_type": 2 00:15:55.267 } 00:15:55.267 ], 00:15:55.267 "driver_specific": { 00:15:55.267 "passthru": { 00:15:55.267 "name": "Passthru0", 00:15:55.267 "base_bdev_name": "Malloc2" 00:15:55.267 } 00:15:55.267 } 00:15:55.267 } 00:15:55.267 ]' 00:15:55.267 14:35:03 -- rpc/rpc.sh@21 -- # jq length 00:15:55.525 14:35:03 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:15:55.525 14:35:03 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:15:55.525 14:35:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:55.525 14:35:03 -- common/autotest_common.sh@10 -- # set +x 00:15:55.525 14:35:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:55.525 14:35:03 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:15:55.525 14:35:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:55.525 14:35:03 -- common/autotest_common.sh@10 -- # set +x 00:15:55.526 14:35:03 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:55.526 14:35:03 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:55.526 14:35:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:55.526 14:35:03 -- common/autotest_common.sh@10 -- # set +x 00:15:55.526 14:35:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:55.526 14:35:03 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:15:55.526 14:35:03 -- rpc/rpc.sh@26 -- # jq length 00:15:55.526 14:35:04 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:15:55.526 00:15:55.526 real 0m0.334s 00:15:55.526 user 0m0.178s 00:15:55.526 sys 0m0.052s 00:15:55.526 14:35:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:55.526 14:35:04 -- common/autotest_common.sh@10 -- # set +x 00:15:55.526 ************************************ 00:15:55.526 END TEST rpc_daemon_integrity 00:15:55.526 ************************************ 00:15:55.526 14:35:04 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:55.526 14:35:04 -- rpc/rpc.sh@84 -- # killprocess 61953 00:15:55.526 14:35:04 -- common/autotest_common.sh@936 -- # '[' -z 61953 ']' 00:15:55.526 14:35:04 -- common/autotest_common.sh@940 -- # kill -0 61953 00:15:55.526 14:35:04 -- common/autotest_common.sh@941 -- # uname 00:15:55.526 14:35:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:55.526 14:35:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61953 00:15:55.526 14:35:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:55.526 14:35:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:55.526 14:35:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61953' 00:15:55.526 killing process with pid 61953 00:15:55.526 14:35:04 -- common/autotest_common.sh@955 -- # kill 61953 00:15:55.526 14:35:04 -- common/autotest_common.sh@960 -- # wait 61953 00:15:58.805 00:15:58.805 real 0m6.187s 00:15:58.805 user 0m6.869s 00:15:58.805 sys 0m1.071s 00:15:58.805 14:35:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:58.805 14:35:06 -- common/autotest_common.sh@10 -- # set +x 00:15:58.805 ************************************ 00:15:58.805 END TEST rpc 00:15:58.805 ************************************ 00:15:58.805 14:35:06 -- spdk/autotest.sh@166 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:15:58.805 14:35:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:58.805 14:35:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:58.805 14:35:06 -- common/autotest_common.sh@10 -- # set +x 00:15:58.805 ************************************ 00:15:58.805 START TEST rpc_client 00:15:58.805 ************************************ 00:15:58.805 14:35:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:15:58.805 * Looking for test storage... 
00:15:58.805 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:15:58.805 14:35:07 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:15:58.805 OK 00:15:58.805 14:35:07 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:15:58.805 00:15:58.805 real 0m0.177s 00:15:58.805 user 0m0.083s 00:15:58.805 sys 0m0.102s 00:15:58.805 14:35:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:58.805 14:35:07 -- common/autotest_common.sh@10 -- # set +x 00:15:58.805 ************************************ 00:15:58.805 END TEST rpc_client 00:15:58.805 ************************************ 00:15:58.805 14:35:07 -- spdk/autotest.sh@167 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:15:58.805 14:35:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:58.805 14:35:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:58.805 14:35:07 -- common/autotest_common.sh@10 -- # set +x 00:15:58.805 ************************************ 00:15:58.805 START TEST json_config 00:15:58.805 ************************************ 00:15:58.805 14:35:07 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:15:58.805 14:35:07 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:58.805 14:35:07 -- nvmf/common.sh@7 -- # uname -s 00:15:58.805 14:35:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:58.805 14:35:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:58.805 14:35:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:58.805 14:35:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:58.805 14:35:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:58.805 14:35:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:58.805 14:35:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:58.805 14:35:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:58.805 14:35:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:58.805 14:35:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:58.805 14:35:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:441f5a21-7e81-426f-9e72-ee510eac3546 00:15:58.805 14:35:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=441f5a21-7e81-426f-9e72-ee510eac3546 00:15:58.805 14:35:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:58.805 14:35:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:58.805 14:35:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:58.805 14:35:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:58.805 14:35:07 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:58.805 14:35:07 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:58.805 14:35:07 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:58.805 14:35:07 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:58.805 14:35:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.805 14:35:07 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.805 14:35:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.805 14:35:07 -- paths/export.sh@5 -- # export PATH 00:15:58.805 14:35:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.805 14:35:07 -- nvmf/common.sh@47 -- # : 0 00:15:58.805 14:35:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:58.805 14:35:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:58.805 14:35:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:58.805 14:35:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:58.805 14:35:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:58.805 14:35:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:58.805 14:35:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:58.805 14:35:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:58.805 14:35:07 -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:15:58.805 14:35:07 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:15:58.805 14:35:07 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:15:58.805 14:35:07 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:15:58.805 14:35:07 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:15:58.805 14:35:07 -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:15:58.805 WARNING: No tests are enabled so not running JSON configuration tests 00:15:58.805 14:35:07 -- json_config/json_config.sh@28 -- # exit 0 00:15:58.805 00:15:58.805 real 0m0.090s 00:15:58.805 user 0m0.042s 00:15:58.805 sys 0m0.042s 00:15:58.805 14:35:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:58.805 14:35:07 -- common/autotest_common.sh@10 -- # set +x 00:15:58.805 ************************************ 00:15:58.805 END TEST json_config 00:15:58.805 ************************************ 00:15:58.805 14:35:07 -- spdk/autotest.sh@168 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:15:58.805 14:35:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:58.805 14:35:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:58.805 14:35:07 -- common/autotest_common.sh@10 -- # 
set +x 00:15:59.064 ************************************ 00:15:59.064 START TEST json_config_extra_key 00:15:59.064 ************************************ 00:15:59.064 14:35:07 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:15:59.064 14:35:07 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:59.064 14:35:07 -- nvmf/common.sh@7 -- # uname -s 00:15:59.064 14:35:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:59.064 14:35:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:59.064 14:35:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:59.064 14:35:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:59.064 14:35:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:59.064 14:35:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:59.064 14:35:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:59.064 14:35:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:59.064 14:35:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:59.064 14:35:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:59.064 14:35:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:441f5a21-7e81-426f-9e72-ee510eac3546 00:15:59.064 14:35:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=441f5a21-7e81-426f-9e72-ee510eac3546 00:15:59.064 14:35:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:59.064 14:35:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:59.064 14:35:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:59.064 14:35:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:59.064 14:35:07 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:59.064 14:35:07 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:59.064 14:35:07 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:59.064 14:35:07 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:59.064 14:35:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.064 14:35:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.064 14:35:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.064 14:35:07 -- paths/export.sh@5 -- # export PATH 00:15:59.064 14:35:07 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.064 14:35:07 -- nvmf/common.sh@47 -- # : 0 00:15:59.064 14:35:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:59.064 14:35:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:59.064 14:35:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:59.064 14:35:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:59.064 14:35:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:59.064 14:35:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:59.064 14:35:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:59.064 14:35:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:59.064 14:35:07 -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:15:59.064 14:35:07 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:15:59.064 14:35:07 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:15:59.064 14:35:07 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:15:59.064 14:35:07 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:15:59.064 14:35:07 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:15:59.064 14:35:07 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:15:59.064 14:35:07 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:15:59.064 14:35:07 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:15:59.064 14:35:07 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:15:59.064 14:35:07 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:15:59.064 INFO: launching applications... 00:15:59.064 14:35:07 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:15:59.064 14:35:07 -- json_config/common.sh@9 -- # local app=target 00:15:59.064 14:35:07 -- json_config/common.sh@10 -- # shift 00:15:59.064 14:35:07 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:15:59.064 14:35:07 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:15:59.064 14:35:07 -- json_config/common.sh@15 -- # local app_extra_params= 00:15:59.064 14:35:07 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:59.064 14:35:07 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:59.064 14:35:07 -- json_config/common.sh@22 -- # app_pid["$app"]=62300 00:15:59.064 14:35:07 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:15:59.064 14:35:07 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:15:59.064 Waiting for target to run... 
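[Editor's sketch] The launch just traced hands off to waitforlisten, which polls the new target's RPC socket until it answers. A minimal sketch of that launch-and-wait pattern, assuming the probe is an rpc.py call with a short timeout; the helper name start_and_wait and the retry bound are illustrative, not SPDK's exact implementation:

    # Launch an SPDK target with a JSON config and block until its RPC
    # socket answers, mirroring the json_config_test_start_app /
    # waitforlisten sequence traced here.
    start_and_wait() {
        local sock=/var/tmp/spdk_tgt.sock
        local json_cfg=$1

        # Start the target with a dedicated RPC socket and the extra-key config.
        ./build/bin/spdk_tgt -m 0x1 -s 1024 -r "$sock" --json "$json_cfg" &
        local pid=$!

        # Poll until the app is up: bail out if the process died, otherwise
        # retry the RPC probe (bounded here to ~100 attempts, an assumption).
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target crashed
            if ./scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &>/dev/null; then
                return 0                             # socket is listening
            fi
            sleep 0.5
        done
        return 1
    }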
00:15:59.064 14:35:07 -- json_config/common.sh@25 -- # waitforlisten 62300 /var/tmp/spdk_tgt.sock 00:15:59.064 14:35:07 -- common/autotest_common.sh@817 -- # '[' -z 62300 ']' 00:15:59.064 14:35:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:15:59.064 14:35:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:59.064 14:35:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:15:59.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:15:59.064 14:35:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:59.064 14:35:07 -- common/autotest_common.sh@10 -- # set +x 00:15:59.323 [2024-04-17 14:35:07.685767] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:15:59.323 [2024-04-17 14:35:07.685907] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62300 ] 00:15:59.581 [2024-04-17 14:35:08.086213] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.840 [2024-04-17 14:35:08.408535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.213 00:16:01.213 INFO: shutting down applications... 00:16:01.213 14:35:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:01.213 14:35:09 -- common/autotest_common.sh@850 -- # return 0 00:16:01.213 14:35:09 -- json_config/common.sh@26 -- # echo '' 00:16:01.213 14:35:09 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:16:01.213 14:35:09 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:16:01.213 14:35:09 -- json_config/common.sh@31 -- # local app=target 00:16:01.213 14:35:09 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:16:01.213 14:35:09 -- json_config/common.sh@35 -- # [[ -n 62300 ]] 00:16:01.213 14:35:09 -- json_config/common.sh@38 -- # kill -SIGINT 62300 00:16:01.213 14:35:09 -- json_config/common.sh@40 -- # (( i = 0 )) 00:16:01.213 14:35:09 -- json_config/common.sh@40 -- # (( i < 30 )) 00:16:01.213 14:35:09 -- json_config/common.sh@41 -- # kill -0 62300 00:16:01.213 14:35:09 -- json_config/common.sh@45 -- # sleep 0.5 00:16:01.471 14:35:09 -- json_config/common.sh@40 -- # (( i++ )) 00:16:01.471 14:35:09 -- json_config/common.sh@40 -- # (( i < 30 )) 00:16:01.471 14:35:09 -- json_config/common.sh@41 -- # kill -0 62300 00:16:01.471 14:35:09 -- json_config/common.sh@45 -- # sleep 0.5 00:16:02.037 14:35:10 -- json_config/common.sh@40 -- # (( i++ )) 00:16:02.037 14:35:10 -- json_config/common.sh@40 -- # (( i < 30 )) 00:16:02.037 14:35:10 -- json_config/common.sh@41 -- # kill -0 62300 00:16:02.037 14:35:10 -- json_config/common.sh@45 -- # sleep 0.5 00:16:02.602 14:35:10 -- json_config/common.sh@40 -- # (( i++ )) 00:16:02.602 14:35:10 -- json_config/common.sh@40 -- # (( i < 30 )) 00:16:02.602 14:35:10 -- json_config/common.sh@41 -- # kill -0 62300 00:16:02.602 14:35:10 -- json_config/common.sh@45 -- # sleep 0.5 00:16:02.859 14:35:11 -- json_config/common.sh@40 -- # (( i++ )) 00:16:02.859 14:35:11 -- json_config/common.sh@40 -- # (( i < 30 )) 00:16:02.859 14:35:11 -- json_config/common.sh@41 -- # kill -0 62300 00:16:02.859 14:35:11 -- json_config/common.sh@45 -- # sleep 0.5 00:16:03.488 14:35:11 -- json_config/common.sh@40 -- # (( i++ )) 
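[Editor's sketch] The repeated kill -0 / sleep 0.5 lines in this stretch are json_config_test_shutdown_app waiting for the target to exit after SIGINT: signal 0 delivers nothing and only tests whether the PID still exists. A minimal sketch of that polling pattern, assuming only what the trace shows (the function name shutdown_app is illustrative):

    # Send SIGINT, then poll with `kill -0` for up to 30 * 0.5s before
    # giving up, matching the loop traced around pid 62300.
    shutdown_app() {
        local pid=$1
        kill -SIGINT "$pid"
        for ((i = 0; i < 30; i++)); do
            if ! kill -0 "$pid" 2>/dev/null; then
                echo 'SPDK target shutdown done'
                return 0
            fi
            sleep 0.5
        done
        return 1    # still alive after ~15s; the caller escalates
    }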
00:16:03.488 14:35:11 -- json_config/common.sh@40 -- # (( i < 30 )) 00:16:03.488 14:35:11 -- json_config/common.sh@41 -- # kill -0 62300 00:16:03.488 14:35:11 -- json_config/common.sh@45 -- # sleep 0.5 00:16:04.073 14:35:12 -- json_config/common.sh@40 -- # (( i++ )) 00:16:04.073 14:35:12 -- json_config/common.sh@40 -- # (( i < 30 )) 00:16:04.073 14:35:12 -- json_config/common.sh@41 -- # kill -0 62300 00:16:04.073 14:35:12 -- json_config/common.sh@45 -- # sleep 0.5 00:16:04.643 14:35:12 -- json_config/common.sh@40 -- # (( i++ )) 00:16:04.643 14:35:12 -- json_config/common.sh@40 -- # (( i < 30 )) 00:16:04.643 14:35:12 -- json_config/common.sh@41 -- # kill -0 62300 00:16:04.643 14:35:12 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:16:04.643 14:35:12 -- json_config/common.sh@43 -- # break 00:16:04.643 14:35:12 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:16:04.643 SPDK target shutdown done 00:16:04.643 14:35:12 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:16:04.643 Success 00:16:04.643 14:35:12 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:16:04.643 00:16:04.643 real 0m5.470s 00:16:04.643 user 0m5.056s 00:16:04.643 sys 0m0.592s 00:16:04.643 14:35:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:04.643 14:35:12 -- common/autotest_common.sh@10 -- # set +x 00:16:04.643 ************************************ 00:16:04.643 END TEST json_config_extra_key 00:16:04.643 ************************************ 00:16:04.643 14:35:12 -- spdk/autotest.sh@169 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:16:04.643 14:35:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:04.643 14:35:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:04.643 14:35:12 -- common/autotest_common.sh@10 -- # set +x 00:16:04.643 ************************************ 00:16:04.643 START TEST alias_rpc 00:16:04.643 ************************************ 00:16:04.643 14:35:13 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:16:04.643 * Looking for test storage... 00:16:04.643 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:16:04.643 14:35:13 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:16:04.643 14:35:13 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=62415 00:16:04.643 14:35:13 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:04.643 14:35:13 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 62415 00:16:04.643 14:35:13 -- common/autotest_common.sh@817 -- # '[' -z 62415 ']' 00:16:04.643 14:35:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.643 14:35:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:04.643 14:35:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.643 14:35:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:04.643 14:35:13 -- common/autotest_common.sh@10 -- # set +x 00:16:04.901 [2024-04-17 14:35:13.306820] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:16:04.902 [2024-04-17 14:35:13.307673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62415 ] 00:16:04.902 [2024-04-17 14:35:13.497314] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.469 [2024-04-17 14:35:13.851922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.408 14:35:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:06.408 14:35:14 -- common/autotest_common.sh@850 -- # return 0 00:16:06.408 14:35:14 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:16:06.666 14:35:15 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 62415 00:16:06.666 14:35:15 -- common/autotest_common.sh@936 -- # '[' -z 62415 ']' 00:16:06.666 14:35:15 -- common/autotest_common.sh@940 -- # kill -0 62415 00:16:06.666 14:35:15 -- common/autotest_common.sh@941 -- # uname 00:16:06.666 14:35:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:06.666 14:35:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62415 00:16:06.924 14:35:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:06.924 14:35:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:06.924 14:35:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62415' 00:16:06.925 killing process with pid 62415 00:16:06.925 14:35:15 -- common/autotest_common.sh@955 -- # kill 62415 00:16:06.925 14:35:15 -- common/autotest_common.sh@960 -- # wait 62415 00:16:09.461 ************************************ 00:16:09.462 END TEST alias_rpc 00:16:09.462 ************************************ 00:16:09.462 00:16:09.462 real 0m4.980s 00:16:09.462 user 0m5.027s 00:16:09.462 sys 0m0.625s 00:16:09.462 14:35:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:09.462 14:35:18 -- common/autotest_common.sh@10 -- # set +x 00:16:09.721 14:35:18 -- spdk/autotest.sh@171 -- # [[ 0 -eq 0 ]] 00:16:09.721 14:35:18 -- spdk/autotest.sh@172 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:16:09.721 14:35:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:09.721 14:35:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:09.721 14:35:18 -- common/autotest_common.sh@10 -- # set +x 00:16:09.721 ************************************ 00:16:09.721 START TEST spdkcli_tcp 00:16:09.721 ************************************ 00:16:09.721 14:35:18 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:16:09.721 * Looking for test storage... 
00:16:09.721 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:09.721 14:35:18 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:16:09.721 14:35:18 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:16:09.721 14:35:18 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:16:09.721 14:35:18 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:16:09.721 14:35:18 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:16:09.721 14:35:18 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:09.721 14:35:18 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:16:09.721 14:35:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:09.721 14:35:18 -- common/autotest_common.sh@10 -- # set +x 00:16:09.721 14:35:18 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=62530 00:16:09.721 14:35:18 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:16:09.721 14:35:18 -- spdkcli/tcp.sh@27 -- # waitforlisten 62530 00:16:09.721 14:35:18 -- common/autotest_common.sh@817 -- # '[' -z 62530 ']' 00:16:09.721 14:35:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.721 14:35:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:09.721 14:35:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.721 14:35:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:09.721 14:35:18 -- common/autotest_common.sh@10 -- # set +x 00:16:09.980 [2024-04-17 14:35:18.419952] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:16:09.980 [2024-04-17 14:35:18.420358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62530 ] 00:16:10.237 [2024-04-17 14:35:18.609422] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:10.573 [2024-04-17 14:35:18.958607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.573 [2024-04-17 14:35:18.958636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:11.508 14:35:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:11.508 14:35:20 -- common/autotest_common.sh@850 -- # return 0 00:16:11.508 14:35:20 -- spdkcli/tcp.sh@31 -- # socat_pid=62558 00:16:11.508 14:35:20 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:16:11.508 14:35:20 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:16:11.766 [ 00:16:11.766 "bdev_malloc_delete", 00:16:11.766 "bdev_malloc_create", 00:16:11.766 "bdev_null_resize", 00:16:11.766 "bdev_null_delete", 00:16:11.766 "bdev_null_create", 00:16:11.766 "bdev_nvme_cuse_unregister", 00:16:11.766 "bdev_nvme_cuse_register", 00:16:11.766 "bdev_opal_new_user", 00:16:11.766 "bdev_opal_set_lock_state", 00:16:11.766 "bdev_opal_delete", 00:16:11.766 "bdev_opal_get_info", 00:16:11.766 "bdev_opal_create", 00:16:11.766 "bdev_nvme_opal_revert", 00:16:11.766 "bdev_nvme_opal_init", 00:16:11.766 "bdev_nvme_send_cmd", 00:16:11.766 "bdev_nvme_get_path_iostat", 00:16:11.766 "bdev_nvme_get_mdns_discovery_info", 00:16:11.766 "bdev_nvme_stop_mdns_discovery", 00:16:11.766 "bdev_nvme_start_mdns_discovery", 00:16:11.766 "bdev_nvme_set_multipath_policy", 00:16:11.766 "bdev_nvme_set_preferred_path", 00:16:11.766 "bdev_nvme_get_io_paths", 00:16:11.766 "bdev_nvme_remove_error_injection", 00:16:11.766 "bdev_nvme_add_error_injection", 00:16:11.766 "bdev_nvme_get_discovery_info", 00:16:11.766 "bdev_nvme_stop_discovery", 00:16:11.766 "bdev_nvme_start_discovery", 00:16:11.766 "bdev_nvme_get_controller_health_info", 00:16:11.766 "bdev_nvme_disable_controller", 00:16:11.766 "bdev_nvme_enable_controller", 00:16:11.766 "bdev_nvme_reset_controller", 00:16:11.766 "bdev_nvme_get_transport_statistics", 00:16:11.766 "bdev_nvme_apply_firmware", 00:16:11.766 "bdev_nvme_detach_controller", 00:16:11.766 "bdev_nvme_get_controllers", 00:16:11.766 "bdev_nvme_attach_controller", 00:16:11.766 "bdev_nvme_set_hotplug", 00:16:11.766 "bdev_nvme_set_options", 00:16:11.766 "bdev_passthru_delete", 00:16:11.766 "bdev_passthru_create", 00:16:11.766 "bdev_lvol_grow_lvstore", 00:16:11.766 "bdev_lvol_get_lvols", 00:16:11.766 "bdev_lvol_get_lvstores", 00:16:11.766 "bdev_lvol_delete", 00:16:11.766 "bdev_lvol_set_read_only", 00:16:11.766 "bdev_lvol_resize", 00:16:11.766 "bdev_lvol_decouple_parent", 00:16:11.766 "bdev_lvol_inflate", 00:16:11.766 "bdev_lvol_rename", 00:16:11.766 "bdev_lvol_clone_bdev", 00:16:11.766 "bdev_lvol_clone", 00:16:11.766 "bdev_lvol_snapshot", 00:16:11.766 "bdev_lvol_create", 00:16:11.766 "bdev_lvol_delete_lvstore", 00:16:11.766 "bdev_lvol_rename_lvstore", 00:16:11.766 "bdev_lvol_create_lvstore", 00:16:11.766 "bdev_raid_set_options", 00:16:11.766 "bdev_raid_remove_base_bdev", 00:16:11.766 "bdev_raid_add_base_bdev", 00:16:11.766 "bdev_raid_delete", 00:16:11.766 "bdev_raid_create", 00:16:11.766 "bdev_raid_get_bdevs", 00:16:11.766 "bdev_error_inject_error", 
00:16:11.766 "bdev_error_delete", 00:16:11.766 "bdev_error_create", 00:16:11.766 "bdev_split_delete", 00:16:11.766 "bdev_split_create", 00:16:11.766 "bdev_delay_delete", 00:16:11.766 "bdev_delay_create", 00:16:11.766 "bdev_delay_update_latency", 00:16:11.766 "bdev_zone_block_delete", 00:16:11.766 "bdev_zone_block_create", 00:16:11.766 "blobfs_create", 00:16:11.766 "blobfs_detect", 00:16:11.766 "blobfs_set_cache_size", 00:16:11.766 "bdev_xnvme_delete", 00:16:11.766 "bdev_xnvme_create", 00:16:11.766 "bdev_aio_delete", 00:16:11.766 "bdev_aio_rescan", 00:16:11.766 "bdev_aio_create", 00:16:11.766 "bdev_ftl_set_property", 00:16:11.766 "bdev_ftl_get_properties", 00:16:11.766 "bdev_ftl_get_stats", 00:16:11.766 "bdev_ftl_unmap", 00:16:11.766 "bdev_ftl_unload", 00:16:11.766 "bdev_ftl_delete", 00:16:11.766 "bdev_ftl_load", 00:16:11.766 "bdev_ftl_create", 00:16:11.766 "bdev_virtio_attach_controller", 00:16:11.766 "bdev_virtio_scsi_get_devices", 00:16:11.766 "bdev_virtio_detach_controller", 00:16:11.766 "bdev_virtio_blk_set_hotplug", 00:16:11.766 "bdev_iscsi_delete", 00:16:11.766 "bdev_iscsi_create", 00:16:11.766 "bdev_iscsi_set_options", 00:16:11.766 "accel_error_inject_error", 00:16:11.766 "ioat_scan_accel_module", 00:16:11.766 "dsa_scan_accel_module", 00:16:11.766 "iaa_scan_accel_module", 00:16:11.766 "keyring_file_remove_key", 00:16:11.766 "keyring_file_add_key", 00:16:11.766 "iscsi_set_options", 00:16:11.766 "iscsi_get_auth_groups", 00:16:11.766 "iscsi_auth_group_remove_secret", 00:16:11.766 "iscsi_auth_group_add_secret", 00:16:11.766 "iscsi_delete_auth_group", 00:16:11.766 "iscsi_create_auth_group", 00:16:11.766 "iscsi_set_discovery_auth", 00:16:11.766 "iscsi_get_options", 00:16:11.766 "iscsi_target_node_request_logout", 00:16:11.766 "iscsi_target_node_set_redirect", 00:16:11.766 "iscsi_target_node_set_auth", 00:16:11.766 "iscsi_target_node_add_lun", 00:16:11.766 "iscsi_get_stats", 00:16:11.766 "iscsi_get_connections", 00:16:11.766 "iscsi_portal_group_set_auth", 00:16:11.766 "iscsi_start_portal_group", 00:16:11.766 "iscsi_delete_portal_group", 00:16:11.766 "iscsi_create_portal_group", 00:16:11.766 "iscsi_get_portal_groups", 00:16:11.766 "iscsi_delete_target_node", 00:16:11.766 "iscsi_target_node_remove_pg_ig_maps", 00:16:11.766 "iscsi_target_node_add_pg_ig_maps", 00:16:11.766 "iscsi_create_target_node", 00:16:11.766 "iscsi_get_target_nodes", 00:16:11.766 "iscsi_delete_initiator_group", 00:16:11.766 "iscsi_initiator_group_remove_initiators", 00:16:11.766 "iscsi_initiator_group_add_initiators", 00:16:11.766 "iscsi_create_initiator_group", 00:16:11.766 "iscsi_get_initiator_groups", 00:16:11.766 "nvmf_set_crdt", 00:16:11.766 "nvmf_set_config", 00:16:11.766 "nvmf_set_max_subsystems", 00:16:11.766 "nvmf_subsystem_get_listeners", 00:16:11.766 "nvmf_subsystem_get_qpairs", 00:16:11.766 "nvmf_subsystem_get_controllers", 00:16:11.766 "nvmf_get_stats", 00:16:11.766 "nvmf_get_transports", 00:16:11.766 "nvmf_create_transport", 00:16:11.766 "nvmf_get_targets", 00:16:11.767 "nvmf_delete_target", 00:16:11.767 "nvmf_create_target", 00:16:11.767 "nvmf_subsystem_allow_any_host", 00:16:11.767 "nvmf_subsystem_remove_host", 00:16:11.767 "nvmf_subsystem_add_host", 00:16:11.767 "nvmf_ns_remove_host", 00:16:11.767 "nvmf_ns_add_host", 00:16:11.767 "nvmf_subsystem_remove_ns", 00:16:11.767 "nvmf_subsystem_add_ns", 00:16:11.767 "nvmf_subsystem_listener_set_ana_state", 00:16:11.767 "nvmf_discovery_get_referrals", 00:16:11.767 "nvmf_discovery_remove_referral", 00:16:11.767 "nvmf_discovery_add_referral", 00:16:11.767 
"nvmf_subsystem_remove_listener", 00:16:11.767 "nvmf_subsystem_add_listener", 00:16:11.767 "nvmf_delete_subsystem", 00:16:11.767 "nvmf_create_subsystem", 00:16:11.767 "nvmf_get_subsystems", 00:16:11.767 "env_dpdk_get_mem_stats", 00:16:11.767 "nbd_get_disks", 00:16:11.767 "nbd_stop_disk", 00:16:11.767 "nbd_start_disk", 00:16:11.767 "ublk_recover_disk", 00:16:11.767 "ublk_get_disks", 00:16:11.767 "ublk_stop_disk", 00:16:11.767 "ublk_start_disk", 00:16:11.767 "ublk_destroy_target", 00:16:11.767 "ublk_create_target", 00:16:11.767 "virtio_blk_create_transport", 00:16:11.767 "virtio_blk_get_transports", 00:16:11.767 "vhost_controller_set_coalescing", 00:16:11.767 "vhost_get_controllers", 00:16:11.767 "vhost_delete_controller", 00:16:11.767 "vhost_create_blk_controller", 00:16:11.767 "vhost_scsi_controller_remove_target", 00:16:11.767 "vhost_scsi_controller_add_target", 00:16:11.767 "vhost_start_scsi_controller", 00:16:11.767 "vhost_create_scsi_controller", 00:16:11.767 "thread_set_cpumask", 00:16:11.767 "framework_get_scheduler", 00:16:11.767 "framework_set_scheduler", 00:16:11.767 "framework_get_reactors", 00:16:11.767 "thread_get_io_channels", 00:16:11.767 "thread_get_pollers", 00:16:11.767 "thread_get_stats", 00:16:11.767 "framework_monitor_context_switch", 00:16:11.767 "spdk_kill_instance", 00:16:11.767 "log_enable_timestamps", 00:16:11.767 "log_get_flags", 00:16:11.767 "log_clear_flag", 00:16:11.767 "log_set_flag", 00:16:11.767 "log_get_level", 00:16:11.767 "log_set_level", 00:16:11.767 "log_get_print_level", 00:16:11.767 "log_set_print_level", 00:16:11.767 "framework_enable_cpumask_locks", 00:16:11.767 "framework_disable_cpumask_locks", 00:16:11.767 "framework_wait_init", 00:16:11.767 "framework_start_init", 00:16:11.767 "scsi_get_devices", 00:16:11.767 "bdev_get_histogram", 00:16:11.767 "bdev_enable_histogram", 00:16:11.767 "bdev_set_qos_limit", 00:16:11.767 "bdev_set_qd_sampling_period", 00:16:11.767 "bdev_get_bdevs", 00:16:11.767 "bdev_reset_iostat", 00:16:11.767 "bdev_get_iostat", 00:16:11.767 "bdev_examine", 00:16:11.767 "bdev_wait_for_examine", 00:16:11.767 "bdev_set_options", 00:16:11.767 "notify_get_notifications", 00:16:11.767 "notify_get_types", 00:16:11.767 "accel_get_stats", 00:16:11.767 "accel_set_options", 00:16:11.767 "accel_set_driver", 00:16:11.767 "accel_crypto_key_destroy", 00:16:11.767 "accel_crypto_keys_get", 00:16:11.767 "accel_crypto_key_create", 00:16:11.767 "accel_assign_opc", 00:16:11.767 "accel_get_module_info", 00:16:11.767 "accel_get_opc_assignments", 00:16:11.767 "vmd_rescan", 00:16:11.767 "vmd_remove_device", 00:16:11.767 "vmd_enable", 00:16:11.767 "sock_set_default_impl", 00:16:11.767 "sock_impl_set_options", 00:16:11.767 "sock_impl_get_options", 00:16:11.767 "iobuf_get_stats", 00:16:11.767 "iobuf_set_options", 00:16:11.767 "framework_get_pci_devices", 00:16:11.767 "framework_get_config", 00:16:11.767 "framework_get_subsystems", 00:16:11.767 "trace_get_info", 00:16:11.767 "trace_get_tpoint_group_mask", 00:16:11.767 "trace_disable_tpoint_group", 00:16:11.767 "trace_enable_tpoint_group", 00:16:11.767 "trace_clear_tpoint_mask", 00:16:11.767 "trace_set_tpoint_mask", 00:16:11.767 "keyring_get_keys", 00:16:11.767 "spdk_get_version", 00:16:11.767 "rpc_get_methods" 00:16:11.767 ] 00:16:11.767 14:35:20 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:16:11.767 14:35:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:11.767 14:35:20 -- common/autotest_common.sh@10 -- # set +x 00:16:12.026 14:35:20 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM 
EXIT 00:16:12.026 14:35:20 -- spdkcli/tcp.sh@38 -- # killprocess 62530 00:16:12.026 14:35:20 -- common/autotest_common.sh@936 -- # '[' -z 62530 ']' 00:16:12.026 14:35:20 -- common/autotest_common.sh@940 -- # kill -0 62530 00:16:12.026 14:35:20 -- common/autotest_common.sh@941 -- # uname 00:16:12.026 14:35:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:12.026 14:35:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62530 00:16:12.026 killing process with pid 62530 00:16:12.026 14:35:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:12.026 14:35:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:12.026 14:35:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62530' 00:16:12.026 14:35:20 -- common/autotest_common.sh@955 -- # kill 62530 00:16:12.026 14:35:20 -- common/autotest_common.sh@960 -- # wait 62530 00:16:14.573 ************************************ 00:16:14.573 END TEST spdkcli_tcp 00:16:14.573 ************************************ 00:16:14.573 00:16:14.573 real 0m4.949s 00:16:14.573 user 0m8.701s 00:16:14.573 sys 0m0.639s 00:16:14.573 14:35:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:14.573 14:35:23 -- common/autotest_common.sh@10 -- # set +x 00:16:14.573 14:35:23 -- spdk/autotest.sh@175 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:16:14.573 14:35:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:14.573 14:35:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:14.574 14:35:23 -- common/autotest_common.sh@10 -- # set +x 00:16:14.831 ************************************ 00:16:14.831 START TEST dpdk_mem_utility 00:16:14.831 ************************************ 00:16:14.831 14:35:23 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:16:14.831 * Looking for test storage... 00:16:14.831 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:16:14.831 14:35:23 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:16:14.831 14:35:23 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=62660 00:16:14.831 14:35:23 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:14.831 14:35:23 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 62660 00:16:14.831 14:35:23 -- common/autotest_common.sh@817 -- # '[' -z 62660 ']' 00:16:14.831 14:35:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.831 14:35:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:14.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.831 14:35:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.831 14:35:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:14.831 14:35:23 -- common/autotest_common.sh@10 -- # set +x 00:16:15.091 [2024-04-17 14:35:23.452779] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:16:15.091 [2024-04-17 14:35:23.453759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62660 ] 00:16:15.091 [2024-04-17 14:35:23.630438] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.658 [2024-04-17 14:35:23.956906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.591 14:35:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:16.591 14:35:25 -- common/autotest_common.sh@850 -- # return 0 00:16:16.591 14:35:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:16:16.591 14:35:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:16:16.591 14:35:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:16.591 14:35:25 -- common/autotest_common.sh@10 -- # set +x 00:16:16.591 { 00:16:16.591 "filename": "/tmp/spdk_mem_dump.txt" 00:16:16.591 } 00:16:16.591 14:35:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:16.591 14:35:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:16:16.591 DPDK memory size 820.000000 MiB in 1 heap(s) 00:16:16.591 1 heaps totaling size 820.000000 MiB 00:16:16.591 size: 820.000000 MiB heap id: 0 00:16:16.591 end heaps---------- 00:16:16.591 8 mempools totaling size 598.116089 MiB 00:16:16.591 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:16:16.591 size: 158.602051 MiB name: PDU_data_out_Pool 00:16:16.591 size: 84.521057 MiB name: bdev_io_62660 00:16:16.591 size: 51.011292 MiB name: evtpool_62660 00:16:16.591 size: 50.003479 MiB name: msgpool_62660 00:16:16.591 size: 21.763794 MiB name: PDU_Pool 00:16:16.591 size: 19.513306 MiB name: SCSI_TASK_Pool 00:16:16.591 size: 0.026123 MiB name: Session_Pool 00:16:16.591 end mempools------- 00:16:16.591 6 memzones totaling size 4.142822 MiB 00:16:16.591 size: 1.000366 MiB name: RG_ring_0_62660 00:16:16.591 size: 1.000366 MiB name: RG_ring_1_62660 00:16:16.591 size: 1.000366 MiB name: RG_ring_4_62660 00:16:16.591 size: 1.000366 MiB name: RG_ring_5_62660 00:16:16.591 size: 0.125366 MiB name: RG_ring_2_62660 00:16:16.591 size: 0.015991 MiB name: RG_ring_3_62660 00:16:16.591 end memzones------- 00:16:16.591 14:35:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:16:16.591 heap id: 0 total size: 820.000000 MiB number of busy elements: 222 number of free elements: 18 00:16:16.591 list of free elements. 
size: 18.470703 MiB 00:16:16.591 element at address: 0x200000400000 with size: 1.999451 MiB 00:16:16.591 element at address: 0x200000800000 with size: 1.996887 MiB 00:16:16.591 element at address: 0x200007000000 with size: 1.995972 MiB 00:16:16.591 element at address: 0x20000b200000 with size: 1.995972 MiB 00:16:16.591 element at address: 0x200019100040 with size: 0.999939 MiB 00:16:16.591 element at address: 0x200019500040 with size: 0.999939 MiB 00:16:16.591 element at address: 0x200019600000 with size: 0.999329 MiB 00:16:16.591 element at address: 0x200003e00000 with size: 0.996094 MiB 00:16:16.591 element at address: 0x200032200000 with size: 0.994324 MiB 00:16:16.591 element at address: 0x200018e00000 with size: 0.959656 MiB 00:16:16.591 element at address: 0x200019900040 with size: 0.937256 MiB 00:16:16.591 element at address: 0x200000200000 with size: 0.835083 MiB 00:16:16.591 element at address: 0x20001b000000 with size: 0.561218 MiB 00:16:16.591 element at address: 0x200019200000 with size: 0.489197 MiB 00:16:16.591 element at address: 0x200019a00000 with size: 0.485413 MiB 00:16:16.591 element at address: 0x200013800000 with size: 0.469116 MiB 00:16:16.591 element at address: 0x200028400000 with size: 0.399719 MiB 00:16:16.591 element at address: 0x200003a00000 with size: 0.356140 MiB 00:16:16.591 list of standard malloc elements. size: 199.264893 MiB 00:16:16.591 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:16:16.591 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:16:16.591 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:16:16.591 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:16:16.591 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:16:16.591 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:16:16.591 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:16:16.591 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:16:16.591 element at address: 0x20000b1ff380 with size: 0.000366 MiB 00:16:16.591 element at address: 0x20000b1ff040 with size: 0.000305 MiB 00:16:16.591 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:16:16.591 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:16:16.591 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:16:16.591 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:16:16.591 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:16:16.591 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:16:16.591 element at address: 0x2000002d6180 with size: 0.000244 MiB 00:16:16.591 element at address: 0x2000002d6280 with size: 0.000244 MiB 00:16:16.591 element at address: 0x2000002d6380 with size: 0.000244 MiB 00:16:16.592 element at address: 0x2000002d6480 with size: 0.000244 MiB 00:16:16.592 element at address: 0x2000002d6580 with size: 0.000244 MiB 00:16:16.592 element at address: 0x2000002d6680 with size: 0.000244 MiB 00:16:16.592 element at address: 0x2000002d6780 with size: 0.000244 MiB 00:16:16.592 element at address: 0x2000002d6880 with size: 0.000244 MiB 00:16:16.592 element at address: 0x2000002d6980 with size: 0.000244 MiB 00:16:16.592 element at address: 0x2000002d6a80 with size: 0.000244 MiB 00:16:16.592 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:16:16.592 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:16:16.592 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:16:16.592 element at address: 0x2000002d7000 with size: 0.000244 MiB 
00:16:16.592 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:16:16.592 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:16:16.592 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:16:16.592 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:16:16.592 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:16:16.592 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:16:16.592 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:16:16.592 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:16:16.592 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:16:16.592 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:16:16.592 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:16:16.592 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:16:16.592 element at address: 0x200003aff980 with size: 0.000244 MiB 00:16:16.592 element at address: 0x200003affa80 with size: 0.000244 MiB 00:16:16.592 element at address: 0x200003eff000 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20000b1ff180 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20000b1ff280 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:16:16.592 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:16:16.592 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:16:16.592 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:16:16.592 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:16:16.592 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:16:16.592 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:16:16.592 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:16:16.592 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:16:16.592 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:16:16.592 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:16:16.592 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:16:16.592 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:16:16.592 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:16:16.592 element at address: 0x200013878180 with size: 0.000244 MiB 00:16:16.592 element at address: 0x200013878280 with size: 0.000244 MiB 00:16:16.592 element at address: 0x200013878380 with size: 0.000244 MiB 00:16:16.592 element at address: 0x200013878480 with size: 0.000244 MiB 00:16:16.592 element at address: 0x200013878580 with size: 0.000244 MiB 00:16:16.592 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:16:16.592 element at 
address: 0x20001927d4c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:16:16.592 element at address: 0x200019abc680 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b08fac0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b08fbc0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b08fcc0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b08fdc0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b08fec0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b08ffc0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0900c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0901c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0902c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0903c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0904c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0905c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0923c0 
with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:16:16.592 element at address: 0x200028466540 with size: 0.000244 MiB 
00:16:16.592 element at address: 0x200028466640 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20002846d300 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20002846d580 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20002846d680 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20002846d780 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20002846d880 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20002846d980 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20002846da80 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20002846db80 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20002846de80 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20002846df80 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20002846e080 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20002846e180 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20002846e280 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20002846e380 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20002846e480 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20002846e580 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20002846e680 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20002846e780 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20002846e880 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20002846e980 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20002846f080 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20002846f180 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20002846f280 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20002846f380 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20002846f480 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20002846f580 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20002846f680 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20002846f780 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20002846f880 with size: 0.000244 MiB 00:16:16.592 element at address: 0x20002846f980 with size: 0.000244 MiB 00:16:16.593 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:16:16.593 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:16:16.593 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:16:16.593 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:16:16.593 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:16:16.593 list of memzone associated elements. 
size: 602.264404 MiB 00:16:16.593 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:16:16.593 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:16:16.593 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:16:16.593 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:16:16.593 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:16:16.593 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_62660_0 00:16:16.593 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:16:16.593 associated memzone info: size: 48.002930 MiB name: MP_evtpool_62660_0 00:16:16.593 element at address: 0x200003fff340 with size: 48.003113 MiB 00:16:16.593 associated memzone info: size: 48.002930 MiB name: MP_msgpool_62660_0 00:16:16.593 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:16:16.593 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:16:16.593 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:16:16.593 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:16:16.593 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:16:16.593 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_62660 00:16:16.593 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:16:16.593 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_62660 00:16:16.593 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:16:16.593 associated memzone info: size: 1.007996 MiB name: MP_evtpool_62660 00:16:16.593 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:16:16.593 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:16:16.593 element at address: 0x200019abc780 with size: 1.008179 MiB 00:16:16.593 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:16:16.593 element at address: 0x200018efde00 with size: 1.008179 MiB 00:16:16.593 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:16:16.593 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:16:16.593 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:16:16.593 element at address: 0x200003eff100 with size: 1.000549 MiB 00:16:16.593 associated memzone info: size: 1.000366 MiB name: RG_ring_0_62660 00:16:16.593 element at address: 0x200003affb80 with size: 1.000549 MiB 00:16:16.593 associated memzone info: size: 1.000366 MiB name: RG_ring_1_62660 00:16:16.593 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:16:16.593 associated memzone info: size: 1.000366 MiB name: RG_ring_4_62660 00:16:16.593 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:16:16.593 associated memzone info: size: 1.000366 MiB name: RG_ring_5_62660 00:16:16.593 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:16:16.593 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_62660 00:16:16.593 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:16:16.593 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:16:16.593 element at address: 0x200013878680 with size: 0.500549 MiB 00:16:16.593 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:16:16.593 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:16:16.593 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:16:16.593 element at address: 0x200003adf740 with size: 0.125549 MiB 00:16:16.593 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_62660 00:16:16.593 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:16:16.593 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:16:16.593 element at address: 0x200028466740 with size: 0.023804 MiB 00:16:16.593 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:16:16.593 element at address: 0x200003adb500 with size: 0.016174 MiB 00:16:16.593 associated memzone info: size: 0.015991 MiB name: RG_ring_3_62660 00:16:16.593 element at address: 0x20002846c8c0 with size: 0.002502 MiB 00:16:16.593 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:16:16.593 element at address: 0x2000002d6b80 with size: 0.000366 MiB 00:16:16.593 associated memzone info: size: 0.000183 MiB name: MP_msgpool_62660 00:16:16.593 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:16:16.593 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_62660 00:16:16.593 element at address: 0x20002846d400 with size: 0.000366 MiB 00:16:16.593 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:16:16.593 14:35:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:16:16.593 14:35:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 62660 00:16:16.593 14:35:25 -- common/autotest_common.sh@936 -- # '[' -z 62660 ']' 00:16:16.593 14:35:25 -- common/autotest_common.sh@940 -- # kill -0 62660 00:16:16.593 14:35:25 -- common/autotest_common.sh@941 -- # uname 00:16:16.593 14:35:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:16.850 14:35:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62660 00:16:16.850 14:35:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:16.850 14:35:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:16.850 14:35:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62660' 00:16:16.851 killing process with pid 62660 00:16:16.851 14:35:25 -- common/autotest_common.sh@955 -- # kill 62660 00:16:16.851 14:35:25 -- common/autotest_common.sh@960 -- # wait 62660 00:16:19.377 00:16:19.377 real 0m4.605s 00:16:19.377 user 0m4.615s 00:16:19.377 sys 0m0.589s 00:16:19.377 14:35:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:19.377 ************************************ 00:16:19.377 14:35:27 -- common/autotest_common.sh@10 -- # set +x 00:16:19.377 END TEST dpdk_mem_utility 00:16:19.377 ************************************ 00:16:19.377 14:35:27 -- spdk/autotest.sh@176 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:16:19.377 14:35:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:19.377 14:35:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:19.377 14:35:27 -- common/autotest_common.sh@10 -- # set +x 00:16:19.377 ************************************ 00:16:19.377 START TEST event 00:16:19.377 ************************************ 00:16:19.377 14:35:27 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:16:19.672 * Looking for test storage... 
00:16:19.672 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:16:19.672 14:35:28 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:19.672 14:35:28 -- bdev/nbd_common.sh@6 -- # set -e 00:16:19.672 14:35:28 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:16:19.672 14:35:28 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:16:19.672 14:35:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:19.672 14:35:28 -- common/autotest_common.sh@10 -- # set +x 00:16:19.672 ************************************ 00:16:19.672 START TEST event_perf 00:16:19.672 ************************************ 00:16:19.672 14:35:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:16:19.673 Running I/O for 1 seconds...[2024-04-17 14:35:28.200204] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:16:19.673 [2024-04-17 14:35:28.200556] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62781 ] 00:16:19.943 [2024-04-17 14:35:28.385786] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:20.202 [2024-04-17 14:35:28.708601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:20.202 [2024-04-17 14:35:28.708662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:20.202 [2024-04-17 14:35:28.708706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.202 [2024-04-17 14:35:28.708728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:21.573 Running I/O for 1 seconds... 00:16:21.573 lcore 0: 166878 00:16:21.573 lcore 1: 166879 00:16:21.573 lcore 2: 166880 00:16:21.573 lcore 3: 166880 00:16:21.831 done. 00:16:21.831 00:16:21.831 real 0m2.056s 00:16:21.831 user 0m4.759s 00:16:21.831 sys 0m0.168s 00:16:21.831 14:35:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:21.831 ************************************ 00:16:21.831 END TEST event_perf 00:16:21.831 ************************************ 00:16:21.831 14:35:30 -- common/autotest_common.sh@10 -- # set +x 00:16:21.831 14:35:30 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:16:21.831 14:35:30 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:21.831 14:35:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:21.831 14:35:30 -- common/autotest_common.sh@10 -- # set +x 00:16:21.831 ************************************ 00:16:21.831 START TEST event_reactor 00:16:21.831 ************************************ 00:16:21.831 14:35:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:16:21.831 [2024-04-17 14:35:30.374431] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:16:21.831 [2024-04-17 14:35:30.374839] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62831 ] 00:16:22.090 [2024-04-17 14:35:30.548212] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.349 [2024-04-17 14:35:30.864575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.765 test_start 00:16:23.765 oneshot 00:16:23.765 tick 100 00:16:23.765 tick 100 00:16:23.765 tick 250 00:16:23.765 tick 100 00:16:23.765 tick 100 00:16:23.765 tick 250 00:16:23.765 tick 500 00:16:23.765 tick 100 00:16:23.765 tick 100 00:16:23.765 tick 100 00:16:23.765 tick 250 00:16:23.765 tick 100 00:16:23.765 tick 100 00:16:23.765 test_end 00:16:23.765 00:16:23.765 real 0m2.025s 00:16:23.765 user 0m1.793s 00:16:23.765 sys 0m0.118s 00:16:23.765 14:35:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:23.765 ************************************ 00:16:23.765 END TEST event_reactor 00:16:23.765 ************************************ 00:16:23.765 14:35:32 -- common/autotest_common.sh@10 -- # set +x 00:16:24.022 14:35:32 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:16:24.022 14:35:32 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:24.022 14:35:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:24.022 14:35:32 -- common/autotest_common.sh@10 -- # set +x 00:16:24.022 ************************************ 00:16:24.022 START TEST event_reactor_perf 00:16:24.022 ************************************ 00:16:24.022 14:35:32 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:16:24.022 [2024-04-17 14:35:32.528004] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:16:24.023 [2024-04-17 14:35:32.528742] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62877 ] 00:16:24.280 [2024-04-17 14:35:32.719806] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.538 [2024-04-17 14:35:32.971056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.912 test_start 00:16:25.912 test_end 00:16:25.912 Performance: 312920 events per second 00:16:25.912 00:16:25.912 real 0m2.011s 00:16:25.912 user 0m1.754s 00:16:25.912 sys 0m0.140s 00:16:25.912 14:35:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:25.912 ************************************ 00:16:25.912 END TEST event_reactor_perf 00:16:25.912 ************************************ 00:16:25.912 14:35:34 -- common/autotest_common.sh@10 -- # set +x 00:16:26.169 14:35:34 -- event/event.sh@49 -- # uname -s 00:16:26.169 14:35:34 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:16:26.169 14:35:34 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:16:26.169 14:35:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:26.169 14:35:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:26.169 14:35:34 -- common/autotest_common.sh@10 -- # set +x 00:16:26.169 ************************************ 00:16:26.169 START TEST event_scheduler 00:16:26.170 ************************************ 00:16:26.170 14:35:34 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:16:26.170 * Looking for test storage... 00:16:26.170 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:16:26.170 14:35:34 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:16:26.170 14:35:34 -- scheduler/scheduler.sh@35 -- # scheduler_pid=62956 00:16:26.170 14:35:34 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:16:26.170 14:35:34 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:16:26.170 14:35:34 -- scheduler/scheduler.sh@37 -- # waitforlisten 62956 00:16:26.170 14:35:34 -- common/autotest_common.sh@817 -- # '[' -z 62956 ']' 00:16:26.170 14:35:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.170 14:35:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:26.170 14:35:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:26.170 14:35:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:26.170 14:35:34 -- common/autotest_common.sh@10 -- # set +x 00:16:26.428 [2024-04-17 14:35:34.818400] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
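[Note: the three event micro-benchmarks above (event_perf, reactor, reactor_perf) can also be run standalone from a built SPDK tree; a minimal sketch, assuming the workspace layout used throughout this run and root access for hugepages. The flags mirror the run_test invocations in the trace: -m is the reactor core mask, -t the run time in seconds.

SPDK=/home/vagrant/spdk_repo/spdk
# Four reactors (mask 0xF), one second: prints the per-lcore event counts.
sudo "$SPDK/test/event/event_perf/event_perf" -m 0xF -t 1
# Single reactor: prints the oneshot/tick trace seen in event_reactor above.
sudo "$SPDK/test/event/reactor/reactor" -t 1
# Single reactor: prints "Performance: N events per second".
sudo "$SPDK/test/event/reactor_perf/reactor_perf" -t 1
]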
00:16:26.428 [2024-04-17 14:35:34.819263] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62956 ] 00:16:26.428 [2024-04-17 14:35:34.993987] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:26.993 [2024-04-17 14:35:35.349159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.993 [2024-04-17 14:35:35.349516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:26.993 [2024-04-17 14:35:35.349558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:26.993 [2024-04-17 14:35:35.349319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:27.283 14:35:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:27.283 14:35:35 -- common/autotest_common.sh@850 -- # return 0 00:16:27.283 14:35:35 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:16:27.283 14:35:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:27.283 14:35:35 -- common/autotest_common.sh@10 -- # set +x 00:16:27.283 POWER: Env isn't set yet! 00:16:27.283 POWER: Attempting to initialise ACPI cpufreq power management... 00:16:27.283 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:16:27.283 POWER: Cannot set governor of lcore 0 to userspace 00:16:27.283 POWER: Attempting to initialise PSTAT power management... 00:16:27.283 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:16:27.283 POWER: Cannot set governor of lcore 0 to performance 00:16:27.283 POWER: Attempting to initialise AMD PSTATE power management... 00:16:27.283 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:16:27.283 POWER: Cannot set governor of lcore 0 to userspace 00:16:27.283 POWER: Attempting to initialise CPPC power management... 00:16:27.283 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:16:27.283 POWER: Cannot set governor of lcore 0 to userspace 00:16:27.283 POWER: Attempting to initialise VM power management... 00:16:27.283 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:16:27.283 POWER: Unable to set Power Management Environment for lcore 0 00:16:27.283 [2024-04-17 14:35:35.768344] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:16:27.283 [2024-04-17 14:35:35.768441] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:16:27.283 [2024-04-17 14:35:35.768485] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:16:27.283 14:35:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:27.283 14:35:35 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:16:27.283 14:35:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:27.283 14:35:35 -- common/autotest_common.sh@10 -- # set +x 00:16:27.849 [2024-04-17 14:35:36.188832] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
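[Note: the framework_set_scheduler / framework_start_init pair above is the standard handshake for an SPDK app launched with --wait-for-rpc; a minimal sketch, assuming the default /var/tmp/spdk.sock socket the scheduler app listens on.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk.sock   # assumed default; this app was started without -r

# Select the dynamic scheduler before subsystem init. The POWER errors
# above are non-fatal: without cpufreq access the DPDK governor fails to
# attach and the scheduler simply runs without frequency scaling.
"$RPC" -s "$SOCK" framework_set_scheduler dynamic

# Complete initialization so the reactors start scheduling threads.
"$RPC" -s "$SOCK" framework_start_init
]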
00:16:27.849 14:35:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:27.849 14:35:36 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:16:27.849 14:35:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:27.849 14:35:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:27.850 14:35:36 -- common/autotest_common.sh@10 -- # set +x 00:16:27.850 ************************************ 00:16:27.850 START TEST scheduler_create_thread 00:16:27.850 ************************************ 00:16:27.850 14:35:36 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:16:27.850 14:35:36 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:16:27.850 14:35:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:27.850 14:35:36 -- common/autotest_common.sh@10 -- # set +x 00:16:27.850 2 00:16:27.850 14:35:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:27.850 14:35:36 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:16:27.850 14:35:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:27.850 14:35:36 -- common/autotest_common.sh@10 -- # set +x 00:16:27.850 3 00:16:27.850 14:35:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:27.850 14:35:36 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:16:27.850 14:35:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:27.850 14:35:36 -- common/autotest_common.sh@10 -- # set +x 00:16:27.850 4 00:16:27.850 14:35:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:27.850 14:35:36 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:16:27.850 14:35:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:27.850 14:35:36 -- common/autotest_common.sh@10 -- # set +x 00:16:27.850 5 00:16:27.850 14:35:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:27.850 14:35:36 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:16:27.850 14:35:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:27.850 14:35:36 -- common/autotest_common.sh@10 -- # set +x 00:16:27.850 6 00:16:27.850 14:35:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:27.850 14:35:36 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:16:27.850 14:35:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:27.850 14:35:36 -- common/autotest_common.sh@10 -- # set +x 00:16:27.850 7 00:16:27.850 14:35:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:27.850 14:35:36 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:16:27.850 14:35:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:27.850 14:35:36 -- common/autotest_common.sh@10 -- # set +x 00:16:27.850 8 00:16:27.850 14:35:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:27.850 14:35:36 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:16:27.850 14:35:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:27.850 14:35:36 -- common/autotest_common.sh@10 -- # set +x 00:16:27.850 9 00:16:27.850 
14:35:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:27.850 14:35:36 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:16:27.850 14:35:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:27.850 14:35:36 -- common/autotest_common.sh@10 -- # set +x 00:16:27.850 10 00:16:27.850 14:35:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:27.850 14:35:36 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:16:27.850 14:35:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:27.850 14:35:36 -- common/autotest_common.sh@10 -- # set +x 00:16:27.850 14:35:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:27.850 14:35:36 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:16:27.850 14:35:36 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:16:27.850 14:35:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:27.850 14:35:36 -- common/autotest_common.sh@10 -- # set +x 00:16:27.850 14:35:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:27.850 14:35:36 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:16:27.850 14:35:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:27.850 14:35:36 -- common/autotest_common.sh@10 -- # set +x 00:16:28.783 14:35:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:28.783 14:35:37 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:16:28.783 14:35:37 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:16:28.783 14:35:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:28.783 14:35:37 -- common/autotest_common.sh@10 -- # set +x 00:16:30.158 14:35:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:30.158 00:16:30.158 real 0m2.138s 00:16:30.158 user 0m0.011s 00:16:30.158 sys 0m0.008s 00:16:30.158 ************************************ 00:16:30.158 END TEST scheduler_create_thread 00:16:30.158 ************************************ 00:16:30.158 14:35:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:30.158 14:35:38 -- common/autotest_common.sh@10 -- # set +x 00:16:30.158 14:35:38 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:16:30.158 14:35:38 -- scheduler/scheduler.sh@46 -- # killprocess 62956 00:16:30.158 14:35:38 -- common/autotest_common.sh@936 -- # '[' -z 62956 ']' 00:16:30.158 14:35:38 -- common/autotest_common.sh@940 -- # kill -0 62956 00:16:30.158 14:35:38 -- common/autotest_common.sh@941 -- # uname 00:16:30.158 14:35:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:30.158 14:35:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62956 00:16:30.158 14:35:38 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:30.158 killing process with pid 62956 00:16:30.158 14:35:38 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:30.158 14:35:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62956' 00:16:30.158 14:35:38 -- common/autotest_common.sh@955 -- # kill 62956 00:16:30.158 14:35:38 -- common/autotest_common.sh@960 -- # wait 62956 00:16:30.416 [2024-04-17 14:35:38.884554] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
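[Note: the create/activate/delete cycle the test just walked through uses test-only RPCs from scheduler_plugin; a condensed sketch, assuming rpc.py can import the plugin (the scheduler test exports its own directory on PYTHONPATH for this).

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
export PYTHONPATH=/home/vagrant/spdk_repo/spdk/test/event/scheduler${PYTHONPATH:+:$PYTHONPATH}

# Create a thread pinned to core 0 that reports itself 100% busy; the RPC
# returns the new thread id (thread_id=11 and 12 in the trace above).
tid=$("$RPC" --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)

# Drop its reported load to 50%, then tear it down again.
"$RPC" --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
"$RPC" --plugin scheduler_plugin scheduler_thread_delete "$tid"
]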
00:16:31.790 00:16:31.790 real 0m5.724s 00:16:31.790 user 0m9.472s 00:16:31.790 sys 0m0.588s 00:16:31.790 14:35:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:31.790 ************************************ 00:16:31.790 END TEST event_scheduler 00:16:31.790 ************************************ 00:16:31.790 14:35:40 -- common/autotest_common.sh@10 -- # set +x 00:16:32.048 14:35:40 -- event/event.sh@51 -- # modprobe -n nbd 00:16:32.048 14:35:40 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:16:32.048 14:35:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:32.048 14:35:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:32.048 14:35:40 -- common/autotest_common.sh@10 -- # set +x 00:16:32.048 ************************************ 00:16:32.048 START TEST app_repeat 00:16:32.048 ************************************ 00:16:32.048 14:35:40 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:16:32.048 14:35:40 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:32.048 14:35:40 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:32.048 14:35:40 -- event/event.sh@13 -- # local nbd_list 00:16:32.048 14:35:40 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:32.048 14:35:40 -- event/event.sh@14 -- # local bdev_list 00:16:32.048 14:35:40 -- event/event.sh@15 -- # local repeat_times=4 00:16:32.048 14:35:40 -- event/event.sh@17 -- # modprobe nbd 00:16:32.048 14:35:40 -- event/event.sh@19 -- # repeat_pid=63071 00:16:32.048 14:35:40 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:16:32.048 14:35:40 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:16:32.048 14:35:40 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 63071' 00:16:32.048 Process app_repeat pid: 63071 00:16:32.048 14:35:40 -- event/event.sh@23 -- # for i in {0..2} 00:16:32.048 14:35:40 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:16:32.048 spdk_app_start Round 0 00:16:32.048 14:35:40 -- event/event.sh@25 -- # waitforlisten 63071 /var/tmp/spdk-nbd.sock 00:16:32.048 14:35:40 -- common/autotest_common.sh@817 -- # '[' -z 63071 ']' 00:16:32.048 14:35:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:32.048 14:35:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:32.048 14:35:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:32.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:32.048 14:35:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:32.048 14:35:40 -- common/autotest_common.sh@10 -- # set +x 00:16:32.048 [2024-04-17 14:35:40.552382] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
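[Note: the modprobe pair above gates everything that follows; app_repeat maps its Malloc bdevs onto kernel nbd devices, so the dry run bails out early on kernels without the module. A two-line sketch of that precondition, assuming a kernel that ships nbd:

modprobe -n nbd      # dry run only: exits non-zero if nbd cannot be loaded
sudo modprobe nbd    # load it for real; /dev/nbd0, /dev/nbd1, ... appear
]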
00:16:32.048 [2024-04-17 14:35:40.552700] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63071 ] 00:16:32.306 [2024-04-17 14:35:40.720871] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:32.564 [2024-04-17 14:35:40.992430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.564 [2024-04-17 14:35:40.992443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.136 14:35:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:33.136 14:35:41 -- common/autotest_common.sh@850 -- # return 0 00:16:33.136 14:35:41 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:33.395 Malloc0 00:16:33.395 14:35:41 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:33.654 Malloc1 00:16:33.914 14:35:42 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:33.914 14:35:42 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:33.914 14:35:42 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:33.914 14:35:42 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:33.914 14:35:42 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:33.914 14:35:42 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:33.914 14:35:42 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:33.914 14:35:42 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:33.914 14:35:42 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:33.914 14:35:42 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:33.914 14:35:42 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:33.914 14:35:42 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:33.914 14:35:42 -- bdev/nbd_common.sh@12 -- # local i 00:16:33.914 14:35:42 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:33.914 14:35:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:33.914 14:35:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:16:33.914 /dev/nbd0 00:16:33.914 14:35:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:33.914 14:35:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:33.914 14:35:42 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:16:33.914 14:35:42 -- common/autotest_common.sh@855 -- # local i 00:16:33.914 14:35:42 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:16:33.914 14:35:42 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:16:33.914 14:35:42 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:16:34.174 14:35:42 -- common/autotest_common.sh@859 -- # break 00:16:34.174 14:35:42 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:16:34.174 14:35:42 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:16:34.174 14:35:42 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:34.174 1+0 records in 00:16:34.174 1+0 records out 00:16:34.174 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000585179 s, 7.0 MB/s 00:16:34.174 14:35:42 -- 
common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:34.174 14:35:42 -- common/autotest_common.sh@872 -- # size=4096 00:16:34.174 14:35:42 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:34.174 14:35:42 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:16:34.174 14:35:42 -- common/autotest_common.sh@875 -- # return 0 00:16:34.174 14:35:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:34.174 14:35:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:34.174 14:35:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:16:34.174 /dev/nbd1 00:16:34.457 14:35:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:34.457 14:35:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:34.457 14:35:42 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:16:34.457 14:35:42 -- common/autotest_common.sh@855 -- # local i 00:16:34.457 14:35:42 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:16:34.457 14:35:42 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:16:34.457 14:35:42 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:16:34.458 14:35:42 -- common/autotest_common.sh@859 -- # break 00:16:34.458 14:35:42 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:16:34.458 14:35:42 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:16:34.458 14:35:42 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:34.458 1+0 records in 00:16:34.458 1+0 records out 00:16:34.458 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459339 s, 8.9 MB/s 00:16:34.458 14:35:42 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:34.458 14:35:42 -- common/autotest_common.sh@872 -- # size=4096 00:16:34.458 14:35:42 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:34.458 14:35:42 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:16:34.458 14:35:42 -- common/autotest_common.sh@875 -- # return 0 00:16:34.458 14:35:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:34.458 14:35:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:34.458 14:35:42 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:34.458 14:35:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:34.458 14:35:42 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:34.731 14:35:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:34.731 { 00:16:34.731 "nbd_device": "/dev/nbd0", 00:16:34.731 "bdev_name": "Malloc0" 00:16:34.731 }, 00:16:34.731 { 00:16:34.731 "nbd_device": "/dev/nbd1", 00:16:34.731 "bdev_name": "Malloc1" 00:16:34.731 } 00:16:34.731 ]' 00:16:34.731 14:35:43 -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:34.731 { 00:16:34.731 "nbd_device": "/dev/nbd0", 00:16:34.731 "bdev_name": "Malloc0" 00:16:34.731 }, 00:16:34.731 { 00:16:34.731 "nbd_device": "/dev/nbd1", 00:16:34.731 "bdev_name": "Malloc1" 00:16:34.731 } 00:16:34.731 ]' 00:16:34.731 14:35:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:34.731 14:35:43 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:34.731 /dev/nbd1' 00:16:34.731 14:35:43 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:34.731 /dev/nbd1' 00:16:34.731 14:35:43 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:34.731 14:35:43 -- bdev/nbd_common.sh@65 -- # count=2 00:16:34.731 14:35:43 -- bdev/nbd_common.sh@66 -- # echo 2 00:16:34.731 14:35:43 -- bdev/nbd_common.sh@95 -- # count=2 00:16:34.731 14:35:43 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:16:34.731 14:35:43 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:16:34.731 14:35:43 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:34.731 14:35:43 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:34.731 14:35:43 -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:34.731 14:35:43 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:34.731 14:35:43 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:34.731 14:35:43 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:16:34.731 256+0 records in 00:16:34.731 256+0 records out 00:16:34.731 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107114 s, 97.9 MB/s 00:16:34.731 14:35:43 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:34.731 14:35:43 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:34.731 256+0 records in 00:16:34.731 256+0 records out 00:16:34.731 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0397607 s, 26.4 MB/s 00:16:34.731 14:35:43 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:34.731 14:35:43 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:34.731 256+0 records in 00:16:34.731 256+0 records out 00:16:34.731 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0470856 s, 22.3 MB/s 00:16:34.731 14:35:43 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:16:34.731 14:35:43 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:34.731 14:35:43 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:34.731 14:35:43 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:34.731 14:35:43 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:34.731 14:35:43 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:34.731 14:35:43 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:34.731 14:35:43 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:34.731 14:35:43 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:16:34.731 14:35:43 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:34.731 14:35:43 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:16:34.731 14:35:43 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:34.731 14:35:43 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:16:34.731 14:35:43 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:34.731 14:35:43 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:34.731 14:35:43 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:34.731 14:35:43 -- bdev/nbd_common.sh@51 -- # local i 00:16:34.731 14:35:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:34.731 14:35:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:35.318 14:35:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:35.318 14:35:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:35.318 14:35:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:35.318 14:35:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:35.318 14:35:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:35.318 14:35:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:35.318 14:35:43 -- bdev/nbd_common.sh@41 -- # break 00:16:35.318 14:35:43 -- bdev/nbd_common.sh@45 -- # return 0 00:16:35.318 14:35:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:35.318 14:35:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:35.589 14:35:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:35.589 14:35:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:35.589 14:35:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:35.589 14:35:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:35.589 14:35:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:35.589 14:35:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:35.589 14:35:43 -- bdev/nbd_common.sh@41 -- # break 00:16:35.589 14:35:43 -- bdev/nbd_common.sh@45 -- # return 0 00:16:35.589 14:35:43 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:35.589 14:35:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:35.589 14:35:43 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:35.870 14:35:44 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:35.870 14:35:44 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:35.870 14:35:44 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:35.870 14:35:44 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:35.870 14:35:44 -- bdev/nbd_common.sh@65 -- # echo '' 00:16:35.870 14:35:44 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:35.870 14:35:44 -- bdev/nbd_common.sh@65 -- # true 00:16:35.870 14:35:44 -- bdev/nbd_common.sh@65 -- # count=0 00:16:35.870 14:35:44 -- bdev/nbd_common.sh@66 -- # echo 0 00:16:35.870 14:35:44 -- bdev/nbd_common.sh@104 -- # count=0 00:16:35.870 14:35:44 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:35.870 14:35:44 -- bdev/nbd_common.sh@109 -- # return 0 00:16:35.870 14:35:44 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:16:36.454 14:35:44 -- event/event.sh@35 -- # sleep 3 00:16:37.829 [2024-04-17 14:35:46.221617] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:38.086 [2024-04-17 14:35:46.473651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.087 [2024-04-17 14:35:46.473656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:38.345 [2024-04-17 14:35:46.723799] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:16:38.345 [2024-04-17 14:35:46.724111] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:16:39.281 14:35:47 -- event/event.sh@23 -- # for i in {0..2} 00:16:39.281 spdk_app_start Round 1 00:16:39.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
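[Note: Round 0 above is one full write/verify cycle of nbd_dd_data_verify; a minimal standalone sketch of the same data path, using the exact dd and cmp flags from the trace (run as root with the two nbd devices already mapped).

tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of random test data
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct  # write it out, bypassing the page cache
    cmp -b -n 1M "$tmp" "$nbd"                             # read back and byte-compare the first 1 MiB
done
rm "$tmp"                                                  # same cleanup the helper performs

oflag=direct matters here: it forces the writes through the nbd device rather than the page cache, so the cmp genuinely exercises the SPDK bdev underneath.]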
00:16:39.281 14:35:47 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:16:39.282 14:35:47 -- event/event.sh@25 -- # waitforlisten 63071 /var/tmp/spdk-nbd.sock 00:16:39.282 14:35:47 -- common/autotest_common.sh@817 -- # '[' -z 63071 ']' 00:16:39.282 14:35:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:39.282 14:35:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:39.282 14:35:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:39.282 14:35:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:39.282 14:35:47 -- common/autotest_common.sh@10 -- # set +x 00:16:39.541 14:35:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:39.541 14:35:48 -- common/autotest_common.sh@850 -- # return 0 00:16:39.541 14:35:48 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:39.800 Malloc0 00:16:39.800 14:35:48 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:40.060 Malloc1 00:16:40.060 14:35:48 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:40.060 14:35:48 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:40.060 14:35:48 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:40.060 14:35:48 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:40.060 14:35:48 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:40.060 14:35:48 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:40.060 14:35:48 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:40.060 14:35:48 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:40.060 14:35:48 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:40.060 14:35:48 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:40.060 14:35:48 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:40.060 14:35:48 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:40.060 14:35:48 -- bdev/nbd_common.sh@12 -- # local i 00:16:40.061 14:35:48 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:40.061 14:35:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:40.061 14:35:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:16:40.321 /dev/nbd0 00:16:40.321 14:35:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:40.321 14:35:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:40.321 14:35:48 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:16:40.321 14:35:48 -- common/autotest_common.sh@855 -- # local i 00:16:40.321 14:35:48 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:16:40.321 14:35:48 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:16:40.321 14:35:48 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:16:40.321 14:35:48 -- common/autotest_common.sh@859 -- # break 00:16:40.321 14:35:48 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:16:40.321 14:35:48 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:16:40.321 14:35:48 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:40.321 1+0 records in 
00:16:40.321 1+0 records out 00:16:40.321 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036618 s, 11.2 MB/s 00:16:40.322 14:35:48 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:40.322 14:35:48 -- common/autotest_common.sh@872 -- # size=4096 00:16:40.322 14:35:48 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:40.322 14:35:48 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:16:40.322 14:35:48 -- common/autotest_common.sh@875 -- # return 0 00:16:40.322 14:35:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:40.322 14:35:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:40.322 14:35:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:16:40.581 /dev/nbd1 00:16:40.581 14:35:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:40.581 14:35:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:40.581 14:35:49 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:16:40.581 14:35:49 -- common/autotest_common.sh@855 -- # local i 00:16:40.581 14:35:49 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:16:40.581 14:35:49 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:16:40.581 14:35:49 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:16:40.581 14:35:49 -- common/autotest_common.sh@859 -- # break 00:16:40.581 14:35:49 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:16:40.581 14:35:49 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:16:40.581 14:35:49 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:40.581 1+0 records in 00:16:40.581 1+0 records out 00:16:40.581 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415466 s, 9.9 MB/s 00:16:40.581 14:35:49 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:40.581 14:35:49 -- common/autotest_common.sh@872 -- # size=4096 00:16:40.581 14:35:49 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:40.581 14:35:49 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:16:40.581 14:35:49 -- common/autotest_common.sh@875 -- # return 0 00:16:40.581 14:35:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:40.581 14:35:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:40.581 14:35:49 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:40.581 14:35:49 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:40.581 14:35:49 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:41.242 14:35:49 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:41.242 { 00:16:41.242 "nbd_device": "/dev/nbd0", 00:16:41.242 "bdev_name": "Malloc0" 00:16:41.242 }, 00:16:41.242 { 00:16:41.242 "nbd_device": "/dev/nbd1", 00:16:41.242 "bdev_name": "Malloc1" 00:16:41.242 } 00:16:41.242 ]' 00:16:41.242 14:35:49 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:41.242 14:35:49 -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:41.242 { 00:16:41.242 "nbd_device": "/dev/nbd0", 00:16:41.242 "bdev_name": "Malloc0" 00:16:41.242 }, 00:16:41.242 { 00:16:41.242 "nbd_device": "/dev/nbd1", 00:16:41.242 "bdev_name": "Malloc1" 00:16:41.242 } 00:16:41.242 ]' 00:16:41.242 14:35:49 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:41.242 /dev/nbd1' 
00:16:41.242 14:35:49 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:41.242 /dev/nbd1' 00:16:41.242 14:35:49 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:41.242 14:35:49 -- bdev/nbd_common.sh@65 -- # count=2 00:16:41.242 14:35:49 -- bdev/nbd_common.sh@66 -- # echo 2 00:16:41.242 14:35:49 -- bdev/nbd_common.sh@95 -- # count=2 00:16:41.242 14:35:49 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:16:41.242 14:35:49 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:16:41.242 14:35:49 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:41.242 14:35:49 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:41.242 14:35:49 -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:41.242 14:35:49 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:41.242 14:35:49 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:41.242 14:35:49 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:16:41.242 256+0 records in 00:16:41.242 256+0 records out 00:16:41.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00717446 s, 146 MB/s 00:16:41.242 14:35:49 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:41.242 14:35:49 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:41.242 256+0 records in 00:16:41.242 256+0 records out 00:16:41.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0329202 s, 31.9 MB/s 00:16:41.242 14:35:49 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:41.242 14:35:49 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:41.242 256+0 records in 00:16:41.242 256+0 records out 00:16:41.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0323324 s, 32.4 MB/s 00:16:41.242 14:35:49 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:16:41.242 14:35:49 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:41.242 14:35:49 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:41.242 14:35:49 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:41.242 14:35:49 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:41.242 14:35:49 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:41.242 14:35:49 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:41.242 14:35:49 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:41.242 14:35:49 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:16:41.242 14:35:49 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:41.242 14:35:49 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:16:41.242 14:35:49 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:41.242 14:35:49 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:16:41.242 14:35:49 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:41.242 14:35:49 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:41.242 14:35:49 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:41.242 14:35:49 -- bdev/nbd_common.sh@51 -- # local i 00:16:41.242 14:35:49 -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:16:41.242 14:35:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:41.503 14:35:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:41.503 14:35:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:41.503 14:35:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:41.503 14:35:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:41.503 14:35:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:41.503 14:35:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:41.503 14:35:49 -- bdev/nbd_common.sh@41 -- # break 00:16:41.503 14:35:49 -- bdev/nbd_common.sh@45 -- # return 0 00:16:41.503 14:35:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:41.503 14:35:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:41.762 14:35:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:41.762 14:35:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:41.762 14:35:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:41.762 14:35:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:41.762 14:35:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:41.762 14:35:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:41.762 14:35:50 -- bdev/nbd_common.sh@41 -- # break 00:16:41.762 14:35:50 -- bdev/nbd_common.sh@45 -- # return 0 00:16:41.762 14:35:50 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:41.762 14:35:50 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:41.762 14:35:50 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:42.021 14:35:50 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:42.021 14:35:50 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:42.021 14:35:50 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:42.021 14:35:50 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:42.021 14:35:50 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:42.021 14:35:50 -- bdev/nbd_common.sh@65 -- # echo '' 00:16:42.021 14:35:50 -- bdev/nbd_common.sh@65 -- # true 00:16:42.021 14:35:50 -- bdev/nbd_common.sh@65 -- # count=0 00:16:42.021 14:35:50 -- bdev/nbd_common.sh@66 -- # echo 0 00:16:42.021 14:35:50 -- bdev/nbd_common.sh@104 -- # count=0 00:16:42.021 14:35:50 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:42.021 14:35:50 -- bdev/nbd_common.sh@109 -- # return 0 00:16:42.021 14:35:50 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:16:42.587 14:35:50 -- event/event.sh@35 -- # sleep 3 00:16:43.981 [2024-04-17 14:35:52.480187] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:44.240 [2024-04-17 14:35:52.739023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.240 [2024-04-17 14:35:52.739040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:44.500 [2024-04-17 14:35:53.009540] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:16:44.500 [2024-04-17 14:35:53.010946] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:16:45.436 spdk_app_start Round 2 00:16:45.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:16:45.436 14:35:53 -- event/event.sh@23 -- # for i in {0..2} 00:16:45.436 14:35:53 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:16:45.436 14:35:53 -- event/event.sh@25 -- # waitforlisten 63071 /var/tmp/spdk-nbd.sock 00:16:45.436 14:35:53 -- common/autotest_common.sh@817 -- # '[' -z 63071 ']' 00:16:45.436 14:35:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:45.436 14:35:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:45.436 14:35:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:45.436 14:35:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:45.436 14:35:53 -- common/autotest_common.sh@10 -- # set +x 00:16:45.699 14:35:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:45.699 14:35:54 -- common/autotest_common.sh@850 -- # return 0 00:16:45.699 14:35:54 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:46.277 Malloc0 00:16:46.277 14:35:54 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:46.536 Malloc1 00:16:46.536 14:35:54 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:46.536 14:35:54 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:46.536 14:35:54 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:46.536 14:35:54 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:46.536 14:35:54 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:46.536 14:35:54 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:46.536 14:35:54 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:46.536 14:35:54 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:46.536 14:35:54 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:46.536 14:35:54 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:46.536 14:35:54 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:46.536 14:35:54 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:46.536 14:35:54 -- bdev/nbd_common.sh@12 -- # local i 00:16:46.536 14:35:54 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:46.536 14:35:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:46.536 14:35:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:16:46.536 /dev/nbd0 00:16:46.794 14:35:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:46.794 14:35:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:46.794 14:35:55 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:16:46.794 14:35:55 -- common/autotest_common.sh@855 -- # local i 00:16:46.794 14:35:55 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:16:46.794 14:35:55 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:16:46.794 14:35:55 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:16:46.794 14:35:55 -- common/autotest_common.sh@859 -- # break 00:16:46.794 14:35:55 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:16:46.794 14:35:55 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:16:46.794 14:35:55 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:16:46.794 1+0 records in 00:16:46.794 1+0 records out 00:16:46.794 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000796731 s, 5.1 MB/s 00:16:46.795 14:35:55 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:46.795 14:35:55 -- common/autotest_common.sh@872 -- # size=4096 00:16:46.795 14:35:55 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:46.795 14:35:55 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:16:46.795 14:35:55 -- common/autotest_common.sh@875 -- # return 0 00:16:46.795 14:35:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:46.795 14:35:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:46.795 14:35:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:16:46.795 /dev/nbd1 00:16:46.795 14:35:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:46.795 14:35:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:46.795 14:35:55 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:16:46.795 14:35:55 -- common/autotest_common.sh@855 -- # local i 00:16:46.795 14:35:55 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:16:46.795 14:35:55 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:16:46.795 14:35:55 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:16:47.053 14:35:55 -- common/autotest_common.sh@859 -- # break 00:16:47.054 14:35:55 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:16:47.054 14:35:55 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:16:47.054 14:35:55 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:47.054 1+0 records in 00:16:47.054 1+0 records out 00:16:47.054 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000617704 s, 6.6 MB/s 00:16:47.054 14:35:55 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:47.054 14:35:55 -- common/autotest_common.sh@872 -- # size=4096 00:16:47.054 14:35:55 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:47.054 14:35:55 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:16:47.054 14:35:55 -- common/autotest_common.sh@875 -- # return 0 00:16:47.054 14:35:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:47.054 14:35:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:47.054 14:35:55 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:47.054 14:35:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:47.054 14:35:55 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:47.054 14:35:55 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:47.054 { 00:16:47.054 "nbd_device": "/dev/nbd0", 00:16:47.054 "bdev_name": "Malloc0" 00:16:47.054 }, 00:16:47.054 { 00:16:47.054 "nbd_device": "/dev/nbd1", 00:16:47.054 "bdev_name": "Malloc1" 00:16:47.054 } 00:16:47.054 ]' 00:16:47.054 14:35:55 -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:47.054 { 00:16:47.054 "nbd_device": "/dev/nbd0", 00:16:47.054 "bdev_name": "Malloc0" 00:16:47.054 }, 00:16:47.054 { 00:16:47.054 "nbd_device": "/dev/nbd1", 00:16:47.054 "bdev_name": "Malloc1" 00:16:47.054 } 00:16:47.054 ]' 00:16:47.054 14:35:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:47.313 14:35:55 -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name='/dev/nbd0 00:16:47.313 /dev/nbd1' 00:16:47.313 14:35:55 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:47.313 /dev/nbd1' 00:16:47.313 14:35:55 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:47.313 14:35:55 -- bdev/nbd_common.sh@65 -- # count=2 00:16:47.313 14:35:55 -- bdev/nbd_common.sh@66 -- # echo 2 00:16:47.313 14:35:55 -- bdev/nbd_common.sh@95 -- # count=2 00:16:47.313 14:35:55 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:16:47.313 14:35:55 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:16:47.313 14:35:55 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:47.313 14:35:55 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:47.313 14:35:55 -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:47.313 14:35:55 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:47.313 14:35:55 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:47.313 14:35:55 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:16:47.313 256+0 records in 00:16:47.313 256+0 records out 00:16:47.313 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00639939 s, 164 MB/s 00:16:47.313 14:35:55 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:47.313 14:35:55 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:47.313 256+0 records in 00:16:47.313 256+0 records out 00:16:47.313 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246628 s, 42.5 MB/s 00:16:47.313 14:35:55 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:47.313 14:35:55 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:47.313 256+0 records in 00:16:47.313 256+0 records out 00:16:47.313 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0292555 s, 35.8 MB/s 00:16:47.313 14:35:55 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:16:47.313 14:35:55 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:47.313 14:35:55 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:47.313 14:35:55 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:47.313 14:35:55 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:47.313 14:35:55 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:47.313 14:35:55 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:47.313 14:35:55 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:47.313 14:35:55 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:16:47.313 14:35:55 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:47.313 14:35:55 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:16:47.313 14:35:55 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:47.313 14:35:55 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:16:47.314 14:35:55 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:47.314 14:35:55 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:47.314 14:35:55 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:47.314 14:35:55 -- bdev/nbd_common.sh@51 -- # local i 00:16:47.314 
14:35:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:47.314 14:35:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:47.572 14:35:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:47.572 14:35:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:47.572 14:35:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:47.572 14:35:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:47.572 14:35:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:47.572 14:35:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:47.572 14:35:56 -- bdev/nbd_common.sh@41 -- # break 00:16:47.572 14:35:56 -- bdev/nbd_common.sh@45 -- # return 0 00:16:47.572 14:35:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:47.572 14:35:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:47.830 14:35:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:47.830 14:35:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:47.830 14:35:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:47.830 14:35:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:47.830 14:35:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:47.830 14:35:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:47.830 14:35:56 -- bdev/nbd_common.sh@41 -- # break 00:16:47.830 14:35:56 -- bdev/nbd_common.sh@45 -- # return 0 00:16:47.830 14:35:56 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:47.830 14:35:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:47.830 14:35:56 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:48.099 14:35:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:48.099 14:35:56 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:48.099 14:35:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:48.099 14:35:56 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:48.099 14:35:56 -- bdev/nbd_common.sh@65 -- # echo '' 00:16:48.099 14:35:56 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:48.099 14:35:56 -- bdev/nbd_common.sh@65 -- # true 00:16:48.099 14:35:56 -- bdev/nbd_common.sh@65 -- # count=0 00:16:48.099 14:35:56 -- bdev/nbd_common.sh@66 -- # echo 0 00:16:48.099 14:35:56 -- bdev/nbd_common.sh@104 -- # count=0 00:16:48.099 14:35:56 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:48.099 14:35:56 -- bdev/nbd_common.sh@109 -- # return 0 00:16:48.099 14:35:56 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:16:48.693 14:35:57 -- event/event.sh@35 -- # sleep 3 00:16:50.067 [2024-04-17 14:35:58.649580] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:50.326 [2024-04-17 14:35:58.898169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.326 [2024-04-17 14:35:58.898206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:50.585 [2024-04-17 14:35:59.156305] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:16:50.585 [2024-04-17 14:35:59.156670] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
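For reference, the nbd exercise traced above collapses to a short flow. A minimal sketch, assuming the target already serves /var/tmp/spdk-nbd.sock and exposes bdevs Malloc0/Malloc1 (the scratch-file path is shortened here; the harness uses test/event/nbdrandtest):

    # attach each bdev to a kernel nbd node
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
    # push a 1 MiB random pattern through each device and read it back
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if=/tmp/nbdrandtest of="$dev" bs=4096 count=256 oflag=direct
        cmp -b -n 1M /tmp/nbdrandtest "$dev"    # nonzero exit on any mismatch
    done
    # detach and confirm nothing is still exported
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[].nbd_device'

The final nbd_get_disks prints nothing once both stops have completed, which is the count=0 branch taken above.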
00:16:51.960 14:36:00 -- event/event.sh@38 -- # waitforlisten 63071 /var/tmp/spdk-nbd.sock 00:16:51.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:51.960 14:36:00 -- common/autotest_common.sh@817 -- # '[' -z 63071 ']' 00:16:51.960 14:36:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:51.960 14:36:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:51.960 14:36:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:51.960 14:36:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:51.960 14:36:00 -- common/autotest_common.sh@10 -- # set +x 00:16:51.960 14:36:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:51.960 14:36:00 -- common/autotest_common.sh@850 -- # return 0 00:16:51.960 14:36:00 -- event/event.sh@39 -- # killprocess 63071 00:16:51.960 14:36:00 -- common/autotest_common.sh@936 -- # '[' -z 63071 ']' 00:16:51.960 14:36:00 -- common/autotest_common.sh@940 -- # kill -0 63071 00:16:51.960 14:36:00 -- common/autotest_common.sh@941 -- # uname 00:16:51.960 14:36:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:51.960 14:36:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63071 00:16:51.960 14:36:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:51.960 14:36:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:51.960 14:36:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63071' 00:16:51.960 killing process with pid 63071 00:16:51.960 14:36:00 -- common/autotest_common.sh@955 -- # kill 63071 00:16:51.960 14:36:00 -- common/autotest_common.sh@960 -- # wait 63071 00:16:53.331 spdk_app_start is called in Round 0. 00:16:53.331 Shutdown signal received, stop current app iteration 00:16:53.331 Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 reinitialization... 00:16:53.331 spdk_app_start is called in Round 1. 00:16:53.331 Shutdown signal received, stop current app iteration 00:16:53.331 Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 reinitialization... 00:16:53.331 spdk_app_start is called in Round 2. 00:16:53.331 Shutdown signal received, stop current app iteration 00:16:53.331 Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 reinitialization... 00:16:53.331 spdk_app_start is called in Round 3. 
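The waitforlisten/killprocess pair that brackets every app in this suite reduces to poll-then-signal. A simplified sketch of the kill side (the real helper in autotest_common.sh adds retry and xtrace plumbing that is elided here):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1    # bail out if the pid is already gone
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1    # refuse to signal sudo
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # reap the child so its exit status is observed
    }

SPDK names its reactor threads reactor_N, and the comm= lookup reflects that, which is why the traces above print process_name=reactor_0.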
00:16:53.331 Shutdown signal received, stop current app iteration 00:16:53.331 14:36:01 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:16:53.331 14:36:01 -- event/event.sh@42 -- # return 0 00:16:53.331 00:16:53.331 real 0m21.261s 00:16:53.331 user 0m44.314s 00:16:53.331 sys 0m3.380s 00:16:53.331 14:36:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:53.331 14:36:01 -- common/autotest_common.sh@10 -- # set +x 00:16:53.331 ************************************ 00:16:53.331 END TEST app_repeat 00:16:53.331 ************************************ 00:16:53.331 14:36:01 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:16:53.331 14:36:01 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:16:53.331 14:36:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:53.331 14:36:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:53.331 14:36:01 -- common/autotest_common.sh@10 -- # set +x 00:16:53.331 ************************************ 00:16:53.331 START TEST cpu_locks 00:16:53.331 ************************************ 00:16:53.331 14:36:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:16:53.644 * Looking for test storage... 00:16:53.644 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:16:53.644 14:36:01 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:16:53.644 14:36:01 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:16:53.644 14:36:01 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:16:53.644 14:36:01 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:16:53.644 14:36:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:53.644 14:36:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:53.644 14:36:01 -- common/autotest_common.sh@10 -- # set +x 00:16:53.644 ************************************ 00:16:53.644 START TEST default_locks 00:16:53.644 ************************************ 00:16:53.644 14:36:02 -- common/autotest_common.sh@1111 -- # default_locks 00:16:53.644 14:36:02 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=63544 00:16:53.644 14:36:02 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:16:53.644 14:36:02 -- event/cpu_locks.sh@47 -- # waitforlisten 63544 00:16:53.644 14:36:02 -- common/autotest_common.sh@817 -- # '[' -z 63544 ']' 00:16:53.644 14:36:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.644 14:36:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:53.644 14:36:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:53.644 14:36:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:53.644 14:36:02 -- common/autotest_common.sh@10 -- # set +x 00:16:53.644 [2024-04-17 14:36:02.216282] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
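cpu_locks.sh keeps two RPC sockets so it can drive two targets side by side, and it registers cleanup on EXIT as well as on signals, so stray spdk_tgt processes and stale /var/tmp/spdk_cpu_lock_* files are removed even when a sub-test aborts. Roughly:

    rpc_sock1=/var/tmp/spdk.sock      # first target
    rpc_sock2=/var/tmp/spdk2.sock     # second target, for the two-instance cases
    trap cleanup EXIT SIGTERM SIGINT  # cleanup kills leftovers and rm -f's the lock files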
00:16:53.644 [2024-04-17 14:36:02.217023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63544 ] 00:16:53.905 [2024-04-17 14:36:02.388334] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.164 [2024-04-17 14:36:02.735037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.540 14:36:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:55.540 14:36:03 -- common/autotest_common.sh@850 -- # return 0 00:16:55.540 14:36:03 -- event/cpu_locks.sh@49 -- # locks_exist 63544 00:16:55.540 14:36:03 -- event/cpu_locks.sh@22 -- # lslocks -p 63544 00:16:55.540 14:36:03 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:16:55.804 14:36:04 -- event/cpu_locks.sh@50 -- # killprocess 63544 00:16:55.804 14:36:04 -- common/autotest_common.sh@936 -- # '[' -z 63544 ']' 00:16:55.804 14:36:04 -- common/autotest_common.sh@940 -- # kill -0 63544 00:16:55.804 14:36:04 -- common/autotest_common.sh@941 -- # uname 00:16:55.804 14:36:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:55.804 14:36:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63544 00:16:55.804 14:36:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:55.804 killing process with pid 63544 00:16:55.804 14:36:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:55.804 14:36:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63544' 00:16:55.804 14:36:04 -- common/autotest_common.sh@955 -- # kill 63544 00:16:55.804 14:36:04 -- common/autotest_common.sh@960 -- # wait 63544 00:16:59.095 14:36:07 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 63544 00:16:59.095 14:36:07 -- common/autotest_common.sh@638 -- # local es=0 00:16:59.095 14:36:07 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 63544 00:16:59.095 14:36:07 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:16:59.095 14:36:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:59.095 14:36:07 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:16:59.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.095 ERROR: process (pid: 63544) is no longer running 00:16:59.095 14:36:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:59.095 14:36:07 -- common/autotest_common.sh@641 -- # waitforlisten 63544 00:16:59.095 14:36:07 -- common/autotest_common.sh@817 -- # '[' -z 63544 ']' 00:16:59.095 14:36:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.095 14:36:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:59.095 14:36:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
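locks_exist above is plain util-linux lslocks filtered for the spdk_cpu_lock name, i.e. an assertion that the pid holds a POSIX lock on its per-core file. A sketch:

    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock    # exit 1 => no core lock held
    }
    locks_exist 63544 && echo 'core lock held'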
00:16:59.095 14:36:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:59.095 14:36:07 -- common/autotest_common.sh@10 -- # set +x 00:16:59.095 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (63544) - No such process 00:16:59.095 14:36:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:59.095 14:36:07 -- common/autotest_common.sh@850 -- # return 1 00:16:59.095 14:36:07 -- common/autotest_common.sh@641 -- # es=1 00:16:59.095 14:36:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:59.095 14:36:07 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:59.095 14:36:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:59.095 14:36:07 -- event/cpu_locks.sh@54 -- # no_locks 00:16:59.095 14:36:07 -- event/cpu_locks.sh@26 -- # lock_files=() 00:16:59.095 14:36:07 -- event/cpu_locks.sh@26 -- # local lock_files 00:16:59.095 14:36:07 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:16:59.095 00:16:59.095 real 0m5.089s 00:16:59.095 user 0m5.078s 00:16:59.095 sys 0m0.794s 00:16:59.095 14:36:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:59.095 14:36:07 -- common/autotest_common.sh@10 -- # set +x 00:16:59.095 ************************************ 00:16:59.095 END TEST default_locks 00:16:59.095 ************************************ 00:16:59.095 14:36:07 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:16:59.095 14:36:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:59.095 14:36:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:59.095 14:36:07 -- common/autotest_common.sh@10 -- # set +x 00:16:59.095 ************************************ 00:16:59.095 START TEST default_locks_via_rpc 00:16:59.095 ************************************ 00:16:59.095 14:36:07 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:16:59.095 14:36:07 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=63634 00:16:59.095 14:36:07 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:16:59.095 14:36:07 -- event/cpu_locks.sh@63 -- # waitforlisten 63634 00:16:59.095 14:36:07 -- common/autotest_common.sh@817 -- # '[' -z 63634 ']' 00:16:59.095 14:36:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.095 14:36:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:59.095 14:36:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.095 14:36:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:59.095 14:36:07 -- common/autotest_common.sh@10 -- # set +x 00:16:59.095 [2024-04-17 14:36:07.404036] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
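All the single-core cases here start spdk_tgt with -m 0x1, a cpumask with only bit 0 set, so exactly one lock file is at stake. The multi-core cases further down use the same scheme, one file per claimed core, and check_remaining_locks compares the directory contents against a brace expansion, roughly:

    # set bits of the mask are the claimed cores: 0x1 -> {0}, 0x7 -> {0,1,2}, 0x1c -> {2,3,4}
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # what a 0x7 mask should leave behind
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]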
00:16:59.095 [2024-04-17 14:36:07.404989] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63634 ] 00:16:59.095 [2024-04-17 14:36:07.575061] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.368 [2024-04-17 14:36:07.945275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.742 14:36:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:00.742 14:36:09 -- common/autotest_common.sh@850 -- # return 0 00:17:00.742 14:36:09 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:17:00.742 14:36:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:00.742 14:36:09 -- common/autotest_common.sh@10 -- # set +x 00:17:00.742 14:36:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:00.742 14:36:09 -- event/cpu_locks.sh@67 -- # no_locks 00:17:00.742 14:36:09 -- event/cpu_locks.sh@26 -- # lock_files=() 00:17:00.742 14:36:09 -- event/cpu_locks.sh@26 -- # local lock_files 00:17:00.742 14:36:09 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:17:00.742 14:36:09 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:17:00.742 14:36:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:00.742 14:36:09 -- common/autotest_common.sh@10 -- # set +x 00:17:00.742 14:36:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:00.742 14:36:09 -- event/cpu_locks.sh@71 -- # locks_exist 63634 00:17:00.742 14:36:09 -- event/cpu_locks.sh@22 -- # lslocks -p 63634 00:17:00.742 14:36:09 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:17:01.001 14:36:09 -- event/cpu_locks.sh@73 -- # killprocess 63634 00:17:01.001 14:36:09 -- common/autotest_common.sh@936 -- # '[' -z 63634 ']' 00:17:01.001 14:36:09 -- common/autotest_common.sh@940 -- # kill -0 63634 00:17:01.001 14:36:09 -- common/autotest_common.sh@941 -- # uname 00:17:01.001 14:36:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:01.001 14:36:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63634 00:17:01.001 14:36:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:01.001 killing process with pid 63634 00:17:01.001 14:36:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:01.001 14:36:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63634' 00:17:01.001 14:36:09 -- common/autotest_common.sh@955 -- # kill 63634 00:17:01.001 14:36:09 -- common/autotest_common.sh@960 -- # wait 63634 00:17:04.302 00:17:04.302 real 0m5.037s 00:17:04.302 user 0m4.982s 00:17:04.302 sys 0m0.748s 00:17:04.302 14:36:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:04.302 14:36:12 -- common/autotest_common.sh@10 -- # set +x 00:17:04.302 ************************************ 00:17:04.302 END TEST default_locks_via_rpc 00:17:04.302 ************************************ 00:17:04.302 14:36:12 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:17:04.302 14:36:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:04.302 14:36:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:04.302 14:36:12 -- common/autotest_common.sh@10 -- # set +x 00:17:04.302 ************************************ 00:17:04.302 START TEST non_locking_app_on_locked_coremask 00:17:04.302 ************************************ 00:17:04.302 14:36:12 -- 
common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:17:04.302 14:36:12 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:17:04.302 14:36:12 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=63723 00:17:04.302 14:36:12 -- event/cpu_locks.sh@81 -- # waitforlisten 63723 /var/tmp/spdk.sock 00:17:04.302 14:36:12 -- common/autotest_common.sh@817 -- # '[' -z 63723 ']' 00:17:04.302 14:36:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.302 14:36:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:04.302 14:36:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.302 14:36:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:04.302 14:36:12 -- common/autotest_common.sh@10 -- # set +x 00:17:04.302 [2024-04-17 14:36:12.622949] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:17:04.302 [2024-04-17 14:36:12.623385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63723 ] 00:17:04.302 [2024-04-17 14:36:12.807751] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.561 [2024-04-17 14:36:13.069404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:17:05.937 14:36:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:05.937 14:36:14 -- common/autotest_common.sh@850 -- # return 0 00:17:05.937 14:36:14 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=63745 00:17:05.937 14:36:14 -- event/cpu_locks.sh@85 -- # waitforlisten 63745 /var/tmp/spdk2.sock 00:17:05.937 14:36:14 -- common/autotest_common.sh@817 -- # '[' -z 63745 ']' 00:17:05.937 14:36:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:17:05.937 14:36:14 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:17:05.937 14:36:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:05.937 14:36:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:17:05.937 14:36:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:05.937 14:36:14 -- common/autotest_common.sh@10 -- # set +x 00:17:05.937 [2024-04-17 14:36:14.244557] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:17:05.937 [2024-04-17 14:36:14.245475] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63745 ] 00:17:05.937 [2024-04-17 14:36:14.432528] app.c: 818:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
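non_locking_app_on_locked_coremask checks the permissive direction: the first target takes the core 0 lock, and a second one started with --disable-cpumask-locks plus its own RPC socket is allowed onto the same core, which is why the EAL output just above ends with "CPU core locks deactivated." instead of a claim error. Schematically:

    build/bin/spdk_tgt -m 0x1 &                                                 # claims core 0
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # tolerated on the same core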
00:17:05.937 [2024-04-17 14:36:14.432591] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.505 [2024-04-17 14:36:14.965320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.035 14:36:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:09.035 14:36:17 -- common/autotest_common.sh@850 -- # return 0 00:17:09.035 14:36:17 -- event/cpu_locks.sh@87 -- # locks_exist 63723 00:17:09.035 14:36:17 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:17:09.035 14:36:17 -- event/cpu_locks.sh@22 -- # lslocks -p 63723 00:17:09.601 14:36:18 -- event/cpu_locks.sh@89 -- # killprocess 63723 00:17:09.601 14:36:18 -- common/autotest_common.sh@936 -- # '[' -z 63723 ']' 00:17:09.601 14:36:18 -- common/autotest_common.sh@940 -- # kill -0 63723 00:17:09.601 14:36:18 -- common/autotest_common.sh@941 -- # uname 00:17:09.601 14:36:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:09.601 14:36:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63723 00:17:09.601 14:36:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:09.601 killing process with pid 63723 00:17:09.601 14:36:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:09.601 14:36:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63723' 00:17:09.601 14:36:18 -- common/autotest_common.sh@955 -- # kill 63723 00:17:09.601 14:36:18 -- common/autotest_common.sh@960 -- # wait 63723 00:17:16.206 14:36:23 -- event/cpu_locks.sh@90 -- # killprocess 63745 00:17:16.206 14:36:23 -- common/autotest_common.sh@936 -- # '[' -z 63745 ']' 00:17:16.206 14:36:23 -- common/autotest_common.sh@940 -- # kill -0 63745 00:17:16.206 14:36:23 -- common/autotest_common.sh@941 -- # uname 00:17:16.206 14:36:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:16.206 14:36:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63745 00:17:16.206 14:36:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:16.206 killing process with pid 63745 00:17:16.206 14:36:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:16.206 14:36:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63745' 00:17:16.206 14:36:23 -- common/autotest_common.sh@955 -- # kill 63745 00:17:16.206 14:36:23 -- common/autotest_common.sh@960 -- # wait 63745 00:17:18.117 ************************************ 00:17:18.117 END TEST non_locking_app_on_locked_coremask 00:17:18.117 ************************************ 00:17:18.117 00:17:18.117 real 0m13.848s 00:17:18.117 user 0m14.299s 00:17:18.117 sys 0m1.606s 00:17:18.117 14:36:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:18.117 14:36:26 -- common/autotest_common.sh@10 -- # set +x 00:17:18.117 14:36:26 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:17:18.117 14:36:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:18.117 14:36:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:18.117 14:36:26 -- common/autotest_common.sh@10 -- # set +x 00:17:18.117 ************************************ 00:17:18.117 START TEST locking_app_on_unlocked_coremask 00:17:18.117 ************************************ 00:17:18.117 14:36:26 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:17:18.117 14:36:26 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=63919 00:17:18.117 14:36:26 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 
0x1 --disable-cpumask-locks 00:17:18.117 14:36:26 -- event/cpu_locks.sh@99 -- # waitforlisten 63919 /var/tmp/spdk.sock 00:17:18.117 14:36:26 -- common/autotest_common.sh@817 -- # '[' -z 63919 ']' 00:17:18.117 14:36:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.117 14:36:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:18.117 14:36:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.117 14:36:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:18.117 14:36:26 -- common/autotest_common.sh@10 -- # set +x 00:17:18.117 [2024-04-17 14:36:26.625347] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:17:18.117 [2024-04-17 14:36:26.625785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63919 ] 00:17:18.375 [2024-04-17 14:36:26.813417] app.c: 818:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:17:18.375 [2024-04-17 14:36:26.813664] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.634 [2024-04-17 14:36:27.153628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.012 14:36:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:20.012 14:36:28 -- common/autotest_common.sh@850 -- # return 0 00:17:20.012 14:36:28 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=63946 00:17:20.012 14:36:28 -- event/cpu_locks.sh@103 -- # waitforlisten 63946 /var/tmp/spdk2.sock 00:17:20.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:17:20.012 14:36:28 -- common/autotest_common.sh@817 -- # '[' -z 63946 ']' 00:17:20.012 14:36:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:17:20.012 14:36:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:20.012 14:36:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:17:20.012 14:36:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:20.012 14:36:28 -- common/autotest_common.sh@10 -- # set +x 00:17:20.012 14:36:28 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:17:20.012 [2024-04-17 14:36:28.350719] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
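locking_app_on_unlocked_coremask flips the roles: here the first target (63919) runs with --disable-cpumask-locks and the plain second one (63946) becomes the lock holder, which is why locks_exist is pointed at 63946 below. With two targets alive, every rpc.py call has to name its socket; for example:

    scripts/rpc.py -s /var/tmp/spdk.sock  framework_enable_cpumask_locks   # talks to 63919
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # talks to 63946

(framework_enable_cpumask_locks is just a convenient concrete method here; the point is the -s routing.)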
00:17:20.012 [2024-04-17 14:36:28.351595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63946 ] 00:17:20.012 [2024-04-17 14:36:28.538927] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.580 [2024-04-17 14:36:29.053475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.111 14:36:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:23.111 14:36:31 -- common/autotest_common.sh@850 -- # return 0 00:17:23.111 14:36:31 -- event/cpu_locks.sh@105 -- # locks_exist 63946 00:17:23.111 14:36:31 -- event/cpu_locks.sh@22 -- # lslocks -p 63946 00:17:23.111 14:36:31 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:17:23.678 14:36:32 -- event/cpu_locks.sh@107 -- # killprocess 63919 00:17:23.678 14:36:32 -- common/autotest_common.sh@936 -- # '[' -z 63919 ']' 00:17:23.678 14:36:32 -- common/autotest_common.sh@940 -- # kill -0 63919 00:17:23.678 14:36:32 -- common/autotest_common.sh@941 -- # uname 00:17:23.678 14:36:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:23.678 14:36:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63919 00:17:23.678 14:36:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:23.678 killing process with pid 63919 00:17:23.678 14:36:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:23.678 14:36:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63919' 00:17:23.678 14:36:32 -- common/autotest_common.sh@955 -- # kill 63919 00:17:23.678 14:36:32 -- common/autotest_common.sh@960 -- # wait 63919 00:17:28.989 14:36:37 -- event/cpu_locks.sh@108 -- # killprocess 63946 00:17:28.989 14:36:37 -- common/autotest_common.sh@936 -- # '[' -z 63946 ']' 00:17:28.989 14:36:37 -- common/autotest_common.sh@940 -- # kill -0 63946 00:17:28.989 14:36:37 -- common/autotest_common.sh@941 -- # uname 00:17:28.989 14:36:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:28.989 14:36:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63946 00:17:28.989 14:36:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:28.989 14:36:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:28.989 killing process with pid 63946 00:17:28.989 14:36:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63946' 00:17:28.989 14:36:37 -- common/autotest_common.sh@955 -- # kill 63946 00:17:28.989 14:36:37 -- common/autotest_common.sh@960 -- # wait 63946 00:17:31.518 ************************************ 00:17:31.518 END TEST locking_app_on_unlocked_coremask 00:17:31.518 ************************************ 00:17:31.518 00:17:31.518 real 0m13.641s 00:17:31.518 user 0m14.226s 00:17:31.518 sys 0m1.513s 00:17:31.518 14:36:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:31.518 14:36:40 -- common/autotest_common.sh@10 -- # set +x 00:17:31.775 14:36:40 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:17:31.775 14:36:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:31.775 14:36:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:31.775 14:36:40 -- common/autotest_common.sh@10 -- # set +x 00:17:31.775 ************************************ 00:17:31.775 START TEST locking_app_on_locked_coremask 00:17:31.775 
************************************ 00:17:31.775 14:36:40 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:17:31.775 14:36:40 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=64116 00:17:31.775 14:36:40 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:17:31.775 14:36:40 -- event/cpu_locks.sh@116 -- # waitforlisten 64116 /var/tmp/spdk.sock 00:17:31.775 14:36:40 -- common/autotest_common.sh@817 -- # '[' -z 64116 ']' 00:17:31.775 14:36:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.775 14:36:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:31.775 14:36:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:31.775 14:36:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:31.775 14:36:40 -- common/autotest_common.sh@10 -- # set +x 00:17:31.775 [2024-04-17 14:36:40.375911] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:17:31.775 [2024-04-17 14:36:40.376305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64116 ] 00:17:32.032 [2024-04-17 14:36:40.567872] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.290 [2024-04-17 14:36:40.884604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.696 14:36:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:33.696 14:36:41 -- common/autotest_common.sh@850 -- # return 0 00:17:33.696 14:36:41 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=64137 00:17:33.696 14:36:41 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 64137 /var/tmp/spdk2.sock 00:17:33.696 14:36:41 -- common/autotest_common.sh@638 -- # local es=0 00:17:33.696 14:36:41 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:17:33.696 14:36:41 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 64137 /var/tmp/spdk2.sock 00:17:33.696 14:36:41 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:17:33.696 14:36:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:33.696 14:36:41 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:17:33.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:17:33.696 14:36:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:33.696 14:36:41 -- common/autotest_common.sh@641 -- # waitforlisten 64137 /var/tmp/spdk2.sock 00:17:33.696 14:36:41 -- common/autotest_common.sh@817 -- # '[' -z 64137 ']' 00:17:33.696 14:36:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:17:33.696 14:36:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:33.696 14:36:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:17:33.696 14:36:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:33.696 14:36:41 -- common/autotest_common.sh@10 -- # set +x 00:17:33.696 [2024-04-17 14:36:42.023402] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
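The NOT waitforlisten wrapper above encodes an expected failure: the second plain-locking target must not come up, since core 0 is already claimed by 64116. Stripped of the valid_exec_arg bookkeeping visible in the trace, the helper is essentially an exit-status inversion, something like:

    NOT() {
        if "$@"; then
            return 1    # unexpected success
        fi
        return 0        # failure was the expected outcome
    }
    NOT waitforlisten 64137 /var/tmp/spdk2.sock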
00:17:33.696 [2024-04-17 14:36:42.023884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64137 ] 00:17:33.696 [2024-04-17 14:36:42.211745] app.c: 688:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 64116 has claimed it. 00:17:33.696 [2024-04-17 14:36:42.211828] app.c: 814:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:17:34.263 ERROR: process (pid: 64137) is no longer running 00:17:34.263 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (64137) - No such process 00:17:34.263 14:36:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:34.263 14:36:42 -- common/autotest_common.sh@850 -- # return 1 00:17:34.263 14:36:42 -- common/autotest_common.sh@641 -- # es=1 00:17:34.263 14:36:42 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:34.263 14:36:42 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:34.263 14:36:42 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:34.263 14:36:42 -- event/cpu_locks.sh@122 -- # locks_exist 64116 00:17:34.263 14:36:42 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:17:34.263 14:36:42 -- event/cpu_locks.sh@22 -- # lslocks -p 64116 00:17:34.830 14:36:43 -- event/cpu_locks.sh@124 -- # killprocess 64116 00:17:34.830 14:36:43 -- common/autotest_common.sh@936 -- # '[' -z 64116 ']' 00:17:34.830 14:36:43 -- common/autotest_common.sh@940 -- # kill -0 64116 00:17:34.830 14:36:43 -- common/autotest_common.sh@941 -- # uname 00:17:34.830 14:36:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:34.830 14:36:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64116 00:17:34.830 14:36:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:34.830 14:36:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:34.830 14:36:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64116' 00:17:34.830 killing process with pid 64116 00:17:34.830 14:36:43 -- common/autotest_common.sh@955 -- # kill 64116 00:17:34.830 14:36:43 -- common/autotest_common.sh@960 -- # wait 64116 00:17:37.424 00:17:37.424 real 0m5.574s 00:17:37.424 user 0m5.780s 00:17:37.424 sys 0m0.966s 00:17:37.425 14:36:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:37.425 14:36:45 -- common/autotest_common.sh@10 -- # set +x 00:17:37.425 ************************************ 00:17:37.425 END TEST locking_app_on_locked_coremask 00:17:37.425 ************************************ 00:17:37.425 14:36:45 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:17:37.425 14:36:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:37.425 14:36:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:37.425 14:36:45 -- common/autotest_common.sh@10 -- # set +x 00:17:37.425 ************************************ 00:17:37.425 START TEST locking_overlapped_coremask 00:17:37.425 ************************************ 00:17:37.425 14:36:45 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:17:37.425 14:36:45 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=64213 00:17:37.425 14:36:45 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:17:37.425 14:36:45 -- event/cpu_locks.sh@133 -- # waitforlisten 64213 /var/tmp/spdk.sock 00:17:37.425 
14:36:45 -- common/autotest_common.sh@817 -- # '[' -z 64213 ']' 00:17:37.425 14:36:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.425 14:36:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:37.425 14:36:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.425 14:36:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:37.425 14:36:45 -- common/autotest_common.sh@10 -- # set +x 00:17:37.684 [2024-04-17 14:36:46.047210] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:17:37.684 [2024-04-17 14:36:46.047605] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64213 ] 00:17:37.684 [2024-04-17 14:36:46.217881] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:37.980 [2024-04-17 14:36:46.488078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.980 [2024-04-17 14:36:46.488095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.980 [2024-04-17 14:36:46.488103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:39.353 14:36:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:39.353 14:36:47 -- common/autotest_common.sh@850 -- # return 0 00:17:39.353 14:36:47 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=64241 00:17:39.353 14:36:47 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 64241 /var/tmp/spdk2.sock 00:17:39.353 14:36:47 -- common/autotest_common.sh@638 -- # local es=0 00:17:39.353 14:36:47 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 64241 /var/tmp/spdk2.sock 00:17:39.353 14:36:47 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:17:39.353 14:36:47 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:17:39.353 14:36:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:39.353 14:36:47 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:17:39.353 14:36:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:39.353 14:36:47 -- common/autotest_common.sh@641 -- # waitforlisten 64241 /var/tmp/spdk2.sock 00:17:39.353 14:36:47 -- common/autotest_common.sh@817 -- # '[' -z 64241 ']' 00:17:39.353 14:36:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:17:39.353 14:36:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:39.353 14:36:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:17:39.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:17:39.353 14:36:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:39.353 14:36:47 -- common/autotest_common.sh@10 -- # set +x 00:17:39.353 [2024-04-17 14:36:47.658378] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
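The overlap driving this test is visible in the masks themselves: 0x7 = 0b00111 covers cores 0-2 and 0x1c = 0b11100 covers cores 2-4, so the two cpumasks intersect exactly on core 2, the core named in the claim error that follows. One line confirms it:

    printf '0x%x\n' $((0x7 & 0x1c))   # prints 0x4, i.e. bit 2 set: core 2 is contested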
00:17:39.353 [2024-04-17 14:36:47.658747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64241 ] 00:17:39.353 [2024-04-17 14:36:47.834761] app.c: 688:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64213 has claimed it. 00:17:39.353 [2024-04-17 14:36:47.834834] app.c: 814:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:17:39.919 ERROR: process (pid: 64241) is no longer running 00:17:39.919 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (64241) - No such process 00:17:39.919 14:36:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:39.919 14:36:48 -- common/autotest_common.sh@850 -- # return 1 00:17:39.919 14:36:48 -- common/autotest_common.sh@641 -- # es=1 00:17:39.919 14:36:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:39.919 14:36:48 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:39.919 14:36:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:39.919 14:36:48 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:17:39.919 14:36:48 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:17:39.919 14:36:48 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:17:39.919 14:36:48 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:17:39.919 14:36:48 -- event/cpu_locks.sh@141 -- # killprocess 64213 00:17:39.919 14:36:48 -- common/autotest_common.sh@936 -- # '[' -z 64213 ']' 00:17:39.919 14:36:48 -- common/autotest_common.sh@940 -- # kill -0 64213 00:17:39.919 14:36:48 -- common/autotest_common.sh@941 -- # uname 00:17:39.919 14:36:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:39.919 14:36:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64213 00:17:39.919 14:36:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:39.919 14:36:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:39.919 14:36:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64213' 00:17:39.919 killing process with pid 64213 00:17:39.919 14:36:48 -- common/autotest_common.sh@955 -- # kill 64213 00:17:39.919 14:36:48 -- common/autotest_common.sh@960 -- # wait 64213 00:17:42.500 00:17:42.500 real 0m5.074s 00:17:42.500 user 0m13.164s 00:17:42.500 sys 0m0.646s 00:17:42.500 ************************************ 00:17:42.500 END TEST locking_overlapped_coremask 00:17:42.500 ************************************ 00:17:42.500 14:36:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:42.500 14:36:51 -- common/autotest_common.sh@10 -- # set +x 00:17:42.500 14:36:51 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:17:42.500 14:36:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:42.500 14:36:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:42.500 14:36:51 -- common/autotest_common.sh@10 -- # set +x 00:17:42.758 ************************************ 00:17:42.758 START TEST locking_overlapped_coremask_via_rpc 00:17:42.758 
************************************ 00:17:42.758 14:36:51 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:17:42.758 14:36:51 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=64309 00:17:42.758 14:36:51 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:17:42.758 14:36:51 -- event/cpu_locks.sh@149 -- # waitforlisten 64309 /var/tmp/spdk.sock 00:17:42.758 14:36:51 -- common/autotest_common.sh@817 -- # '[' -z 64309 ']' 00:17:42.758 14:36:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.758 14:36:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:42.758 14:36:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.758 14:36:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:42.758 14:36:51 -- common/autotest_common.sh@10 -- # set +x 00:17:42.758 [2024-04-17 14:36:51.284338] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:17:42.758 [2024-04-17 14:36:51.284774] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64309 ] 00:17:43.016 [2024-04-17 14:36:51.473511] app.c: 818:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:17:43.016 [2024-04-17 14:36:51.473790] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:43.276 [2024-04-17 14:36:51.807197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.276 [2024-04-17 14:36:51.807279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.276 [2024-04-17 14:36:51.807292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:44.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:17:44.659 14:36:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:44.659 14:36:52 -- common/autotest_common.sh@850 -- # return 0 00:17:44.659 14:36:52 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=64338 00:17:44.659 14:36:52 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:17:44.659 14:36:52 -- event/cpu_locks.sh@153 -- # waitforlisten 64338 /var/tmp/spdk2.sock 00:17:44.659 14:36:52 -- common/autotest_common.sh@817 -- # '[' -z 64338 ']' 00:17:44.659 14:36:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:17:44.659 14:36:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:44.659 14:36:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:17:44.659 14:36:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:44.659 14:36:52 -- common/autotest_common.sh@10 -- # set +x 00:17:44.659 [2024-04-17 14:36:53.050933] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:17:44.659 [2024-04-17 14:36:53.051362] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64338 ] 00:17:44.659 [2024-04-17 14:36:53.258373] app.c: 818:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:17:44.659 [2024-04-17 14:36:53.258454] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:45.224 [2024-04-17 14:36:53.805152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:45.224 [2024-04-17 14:36:53.815672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:45.224 [2024-04-17 14:36:53.815678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:47.750 14:36:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:47.750 14:36:55 -- common/autotest_common.sh@850 -- # return 0 00:17:47.750 14:36:55 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:17:47.750 14:36:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.750 14:36:55 -- common/autotest_common.sh@10 -- # set +x 00:17:47.750 14:36:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:47.750 14:36:55 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:17:47.750 14:36:55 -- common/autotest_common.sh@638 -- # local es=0 00:17:47.750 14:36:55 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:17:47.750 14:36:55 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:17:47.750 14:36:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:47.750 14:36:55 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:17:47.750 14:36:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:47.750 14:36:55 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:17:47.750 14:36:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:47.750 14:36:55 -- common/autotest_common.sh@10 -- # set +x 00:17:47.750 [2024-04-17 14:36:55.996705] app.c: 688:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64309 has claimed it. 00:17:47.750 request: 00:17:47.750 { 00:17:47.750 "method": "framework_enable_cpumask_locks", 00:17:47.750 "req_id": 1 00:17:47.750 } 00:17:47.750 Got JSON-RPC error response 00:17:47.750 response: 00:17:47.750 { 00:17:47.750 "code": -32603, 00:17:47.750 "message": "Failed to claim CPU core: 2" 00:17:47.750 } 00:17:47.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
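The failed enable surfaces as a well-formed JSON-RPC error rather than a transport failure: -32603 is the generic internal-error code, with the contested core carried in the message. Reproducing it by hand against the second socket would look like:

    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # => error -32603, "Failed to claim CPU core: 2", while pid 64309 holds the lock files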
00:17:47.750 14:36:56 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:17:47.750 14:36:56 -- common/autotest_common.sh@641 -- # es=1 00:17:47.750 14:36:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:47.750 14:36:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:47.750 14:36:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:47.750 14:36:56 -- event/cpu_locks.sh@158 -- # waitforlisten 64309 /var/tmp/spdk.sock 00:17:47.750 14:36:56 -- common/autotest_common.sh@817 -- # '[' -z 64309 ']' 00:17:47.750 14:36:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.750 14:36:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:47.750 14:36:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.750 14:36:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:47.750 14:36:56 -- common/autotest_common.sh@10 -- # set +x 00:17:47.750 14:36:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:47.750 14:36:56 -- common/autotest_common.sh@850 -- # return 0 00:17:47.750 14:36:56 -- event/cpu_locks.sh@159 -- # waitforlisten 64338 /var/tmp/spdk2.sock 00:17:47.750 14:36:56 -- common/autotest_common.sh@817 -- # '[' -z 64338 ']' 00:17:47.750 14:36:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:17:47.750 14:36:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:47.750 14:36:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:17:47.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:17:47.750 14:36:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:47.750 14:36:56 -- common/autotest_common.sh@10 -- # set +x 00:17:48.010 14:36:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:48.010 14:36:56 -- common/autotest_common.sh@850 -- # return 0 00:17:48.010 14:36:56 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:17:48.010 14:36:56 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:17:48.010 14:36:56 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:17:48.010 14:36:56 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:17:48.010 00:17:48.010 real 0m5.336s 00:17:48.010 user 0m1.408s 00:17:48.010 sys 0m0.246s 00:17:48.010 14:36:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:48.010 14:36:56 -- common/autotest_common.sh@10 -- # set +x 00:17:48.010 ************************************ 00:17:48.010 END TEST locking_overlapped_coremask_via_rpc 00:17:48.010 ************************************ 00:17:48.010 14:36:56 -- event/cpu_locks.sh@174 -- # cleanup 00:17:48.010 14:36:56 -- event/cpu_locks.sh@15 -- # [[ -z 64309 ]] 00:17:48.010 14:36:56 -- event/cpu_locks.sh@15 -- # killprocess 64309 00:17:48.010 14:36:56 -- common/autotest_common.sh@936 -- # '[' -z 64309 ']' 00:17:48.010 14:36:56 -- common/autotest_common.sh@940 -- # kill -0 64309 00:17:48.010 14:36:56 -- common/autotest_common.sh@941 -- # uname 00:17:48.010 14:36:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:48.010 14:36:56 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 64309 00:17:48.010 killing process with pid 64309 00:17:48.010 14:36:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:48.010 14:36:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:48.010 14:36:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64309' 00:17:48.010 14:36:56 -- common/autotest_common.sh@955 -- # kill 64309 00:17:48.010 14:36:56 -- common/autotest_common.sh@960 -- # wait 64309 00:17:51.292 14:36:59 -- event/cpu_locks.sh@16 -- # [[ -z 64338 ]] 00:17:51.292 14:36:59 -- event/cpu_locks.sh@16 -- # killprocess 64338 00:17:51.292 14:36:59 -- common/autotest_common.sh@936 -- # '[' -z 64338 ']' 00:17:51.292 14:36:59 -- common/autotest_common.sh@940 -- # kill -0 64338 00:17:51.292 14:36:59 -- common/autotest_common.sh@941 -- # uname 00:17:51.292 14:36:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:51.292 14:36:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64338 00:17:51.292 killing process with pid 64338 00:17:51.292 14:36:59 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:51.292 14:36:59 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:51.292 14:36:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64338' 00:17:51.292 14:36:59 -- common/autotest_common.sh@955 -- # kill 64338 00:17:51.292 14:36:59 -- common/autotest_common.sh@960 -- # wait 64338 00:17:53.827 14:37:02 -- event/cpu_locks.sh@18 -- # rm -f 00:17:53.827 14:37:02 -- event/cpu_locks.sh@1 -- # cleanup 00:17:53.827 14:37:02 -- event/cpu_locks.sh@15 -- # [[ -z 64309 ]] 00:17:53.827 14:37:02 -- event/cpu_locks.sh@15 -- # killprocess 64309 00:17:53.827 14:37:02 -- common/autotest_common.sh@936 -- # '[' -z 64309 ']' 00:17:53.827 14:37:02 -- common/autotest_common.sh@940 -- # kill -0 64309 00:17:53.827 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (64309) - No such process 00:17:53.827 Process with pid 64309 is not found 00:17:53.827 Process with pid 64338 is not found 00:17:53.827 14:37:02 -- common/autotest_common.sh@963 -- # echo 'Process with pid 64309 is not found' 00:17:53.827 14:37:02 -- event/cpu_locks.sh@16 -- # [[ -z 64338 ]] 00:17:53.827 14:37:02 -- event/cpu_locks.sh@16 -- # killprocess 64338 00:17:53.827 14:37:02 -- common/autotest_common.sh@936 -- # '[' -z 64338 ']' 00:17:53.827 14:37:02 -- common/autotest_common.sh@940 -- # kill -0 64338 00:17:53.827 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (64338) - No such process 00:17:53.827 14:37:02 -- common/autotest_common.sh@963 -- # echo 'Process with pid 64338 is not found' 00:17:53.827 14:37:02 -- event/cpu_locks.sh@18 -- # rm -f 00:17:53.827 ************************************ 00:17:53.827 END TEST cpu_locks 00:17:53.827 ************************************ 00:17:53.827 00:17:53.827 real 1m0.389s 00:17:53.827 user 1m40.024s 00:17:53.827 sys 0m7.987s 00:17:53.827 14:37:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:53.827 14:37:02 -- common/autotest_common.sh@10 -- # set +x 00:17:53.827 ************************************ 00:17:53.827 END TEST event 00:17:53.827 ************************************ 00:17:53.827 00:17:53.827 real 1m34.354s 00:17:53.827 user 2m42.413s 00:17:53.827 sys 0m12.864s 00:17:53.827 14:37:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:53.827 14:37:02 -- common/autotest_common.sh@10 -- # set +x 00:17:53.827 14:37:02 -- spdk/autotest.sh@177 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:17:53.827 14:37:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:53.827 14:37:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:53.827 14:37:02 -- common/autotest_common.sh@10 -- # set +x 00:17:54.086 ************************************ 00:17:54.086 START TEST thread 00:17:54.086 ************************************ 00:17:54.086 14:37:02 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:17:54.086 * Looking for test storage... 00:17:54.086 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:17:54.086 14:37:02 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:17:54.086 14:37:02 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:17:54.086 14:37:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:54.086 14:37:02 -- common/autotest_common.sh@10 -- # set +x 00:17:54.086 ************************************ 00:17:54.086 START TEST thread_poller_perf 00:17:54.086 ************************************ 00:17:54.086 14:37:02 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:17:54.344 [2024-04-17 14:37:02.697533] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:17:54.344 [2024-04-17 14:37:02.697842] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64550 ] 00:17:54.344 [2024-04-17 14:37:02.872423] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.922 [2024-04-17 14:37:03.212512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.922 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:17:56.309 ====================================== 00:17:56.309 busy:2116190134 (cyc) 00:17:56.309 total_run_count: 338000 00:17:56.309 tsc_hz: 2100000000 (cyc) 00:17:56.309 ====================================== 00:17:56.309 poller_cost: 6260 (cyc), 2980 (nsec) 00:17:56.309 00:17:56.309 real 0m2.036s 00:17:56.309 user 0m1.795s 00:17:56.309 sys 0m0.128s 00:17:56.309 14:37:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:56.309 14:37:04 -- common/autotest_common.sh@10 -- # set +x 00:17:56.309 ************************************ 00:17:56.309 END TEST thread_poller_perf 00:17:56.309 ************************************ 00:17:56.309 14:37:04 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:17:56.309 14:37:04 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:17:56.309 14:37:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:56.309 14:37:04 -- common/autotest_common.sh@10 -- # set +x 00:17:56.309 ************************************ 00:17:56.309 START TEST thread_poller_perf 00:17:56.309 ************************************ 00:17:56.309 14:37:04 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:17:56.309 [2024-04-17 14:37:04.861802] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:17:56.309 [2024-04-17 14:37:04.862236] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64598 ] 00:17:56.579 [2024-04-17 14:37:05.037382] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.852 [2024-04-17 14:37:05.280680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.852 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:17:58.262 ====================================== 00:17:58.262 busy:2104485846 (cyc) 00:17:58.262 total_run_count: 4540000 00:17:58.262 tsc_hz: 2100000000 (cyc) 00:17:58.262 ====================================== 00:17:58.262 poller_cost: 463 (cyc), 220 (nsec) 00:17:58.262 00:17:58.262 real 0m1.923s 00:17:58.262 user 0m1.679s 00:17:58.262 sys 0m0.133s 00:17:58.262 14:37:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:58.262 14:37:06 -- common/autotest_common.sh@10 -- # set +x 00:17:58.262 ************************************ 00:17:58.262 END TEST thread_poller_perf 00:17:58.262 ************************************ 00:17:58.262 14:37:06 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:17:58.262 ************************************ 00:17:58.262 END TEST thread 00:17:58.262 ************************************ 00:17:58.262 00:17:58.262 real 0m4.310s 00:17:58.262 user 0m3.596s 00:17:58.262 sys 0m0.451s 00:17:58.262 14:37:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:58.262 14:37:06 -- common/autotest_common.sh@10 -- # set +x 00:17:58.262 14:37:06 -- spdk/autotest.sh@178 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:17:58.262 14:37:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:58.262 14:37:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:58.262 14:37:06 -- common/autotest_common.sh@10 -- # set +x 00:17:58.520 ************************************ 00:17:58.520 START TEST accel 00:17:58.520 ************************************ 00:17:58.520 14:37:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:17:58.520 * Looking for test storage... 00:17:58.520 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:17:58.520 14:37:07 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:17:58.520 14:37:07 -- accel/accel.sh@82 -- # get_expected_opcs 00:17:58.520 14:37:07 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:17:58.520 14:37:07 -- accel/accel.sh@62 -- # spdk_tgt_pid=64684 00:17:58.520 14:37:07 -- accel/accel.sh@63 -- # waitforlisten 64684 00:17:58.520 14:37:07 -- common/autotest_common.sh@817 -- # '[' -z 64684 ']' 00:17:58.520 14:37:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.520 14:37:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:58.520 14:37:07 -- accel/accel.sh@61 -- # build_accel_config 00:17:58.520 14:37:07 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:58.520 14:37:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
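The poller_cost figures in the two ====-delimited poller_perf summaries above are plain arithmetic over the counters printed beside them: busy cycles divided by total_run_count gives the cost per poller invocation in cycles, and tsc_hz converts that to nanoseconds. A minimal bash sketch reproducing the first run's numbers (all inputs taken from this log; the 0-microsecond run works out the same way: 2104485846 / 4540000 = 463 cyc, and 463 * 1000000000 / 2100000000 = 220 nsec):

    # derive poller_cost from the counters poller_perf prints
    busy=2116190134       # busy: cycles spent running pollers
    runs=338000           # total_run_count
    tsc_hz=2100000000     # timestamp-counter frequency, 2.1 GHz
    cost_cyc=$(( busy / runs ))                     # 6260 (cyc)
    cost_ns=$(( cost_cyc * 1000000000 / tsc_hz ))   # 2980 (nsec)
    echo "poller_cost: ${cost_cyc} (cyc), ${cost_ns} (nsec)"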
00:17:58.520 14:37:07 -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:17:58.520 14:37:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:58.520 14:37:07 -- common/autotest_common.sh@10 -- # set +x 00:17:58.520 14:37:07 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:58.520 14:37:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:58.520 14:37:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:58.520 14:37:07 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:58.520 14:37:07 -- accel/accel.sh@40 -- # local IFS=, 00:17:58.520 14:37:07 -- accel/accel.sh@41 -- # jq -r . 00:17:58.777 [2024-04-17 14:37:07.143363] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:17:58.777 [2024-04-17 14:37:07.143741] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64684 ] 00:17:58.777 [2024-04-17 14:37:07.332787] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.342 [2024-04-17 14:37:07.665861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.275 14:37:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:00.275 14:37:08 -- common/autotest_common.sh@850 -- # return 0 00:18:00.275 14:37:08 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:18:00.275 14:37:08 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:18:00.275 14:37:08 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:18:00.275 14:37:08 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:18:00.275 14:37:08 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:18:00.275 14:37:08 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:18:00.275 14:37:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:00.275 14:37:08 -- common/autotest_common.sh@10 -- # set +x 00:18:00.275 14:37:08 -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:18:00.275 14:37:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:00.275 14:37:08 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:00.275 14:37:08 -- accel/accel.sh@72 -- # IFS== 00:18:00.275 14:37:08 -- accel/accel.sh@72 -- # read -r opc module 00:18:00.275 14:37:08 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:00.275 14:37:08 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:00.275 14:37:08 -- accel/accel.sh@72 -- # IFS== 00:18:00.275 14:37:08 -- accel/accel.sh@72 -- # read -r opc module 00:18:00.275 14:37:08 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:00.275 14:37:08 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:00.275 14:37:08 -- accel/accel.sh@72 -- # IFS== 00:18:00.275 14:37:08 -- accel/accel.sh@72 -- # read -r opc module 00:18:00.275 14:37:08 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:00.275 14:37:08 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:00.275 14:37:08 -- accel/accel.sh@72 -- # IFS== 00:18:00.275 14:37:08 -- accel/accel.sh@72 -- # read -r opc module 00:18:00.275 14:37:08 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:00.275 14:37:08 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:00.275 14:37:08 -- accel/accel.sh@72 -- # IFS== 00:18:00.275 14:37:08 -- accel/accel.sh@72 -- # read -r opc module 00:18:00.276 14:37:08 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:00.276 14:37:08 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:00.276 14:37:08 -- accel/accel.sh@72 -- # IFS== 00:18:00.276 14:37:08 -- accel/accel.sh@72 -- # read -r opc module 00:18:00.276 14:37:08 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:00.276 14:37:08 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:00.276 14:37:08 -- accel/accel.sh@72 -- # IFS== 00:18:00.276 14:37:08 -- accel/accel.sh@72 -- # read -r opc module 00:18:00.276 14:37:08 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:00.276 14:37:08 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:00.276 14:37:08 -- accel/accel.sh@72 -- # IFS== 00:18:00.276 14:37:08 -- accel/accel.sh@72 -- # read -r opc module 00:18:00.276 14:37:08 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:00.276 14:37:08 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:00.276 14:37:08 -- accel/accel.sh@72 -- # IFS== 00:18:00.276 14:37:08 -- accel/accel.sh@72 -- # read -r opc module 00:18:00.276 14:37:08 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:00.276 14:37:08 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:00.276 14:37:08 -- accel/accel.sh@72 -- # IFS== 00:18:00.276 14:37:08 -- accel/accel.sh@72 -- # read -r opc module 00:18:00.276 14:37:08 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:00.276 14:37:08 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:00.276 14:37:08 -- accel/accel.sh@72 -- # IFS== 00:18:00.276 14:37:08 -- accel/accel.sh@72 -- # read -r opc module 00:18:00.276 14:37:08 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:00.276 14:37:08 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:00.276 14:37:08 -- accel/accel.sh@72 -- # IFS== 00:18:00.276 14:37:08 -- accel/accel.sh@72 -- # read -r opc module 00:18:00.276 14:37:08 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:00.276 14:37:08 -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:18:00.276 14:37:08 -- accel/accel.sh@72 -- # IFS== 00:18:00.276 14:37:08 -- accel/accel.sh@72 -- # read -r opc module 00:18:00.276 14:37:08 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:00.276 14:37:08 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:00.276 14:37:08 -- accel/accel.sh@72 -- # IFS== 00:18:00.276 14:37:08 -- accel/accel.sh@72 -- # read -r opc module 00:18:00.276 14:37:08 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:00.276 14:37:08 -- accel/accel.sh@75 -- # killprocess 64684 00:18:00.276 14:37:08 -- common/autotest_common.sh@936 -- # '[' -z 64684 ']' 00:18:00.276 14:37:08 -- common/autotest_common.sh@940 -- # kill -0 64684 00:18:00.276 14:37:08 -- common/autotest_common.sh@941 -- # uname 00:18:00.276 14:37:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:00.276 14:37:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64684 00:18:00.276 14:37:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:00.276 14:37:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:00.276 14:37:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64684' 00:18:00.276 killing process with pid 64684 00:18:00.276 14:37:08 -- common/autotest_common.sh@955 -- # kill 64684 00:18:00.276 14:37:08 -- common/autotest_common.sh@960 -- # wait 64684 00:18:03.556 14:37:11 -- accel/accel.sh@76 -- # trap - ERR 00:18:03.556 14:37:11 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:18:03.556 14:37:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:03.556 14:37:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:03.556 14:37:11 -- common/autotest_common.sh@10 -- # set +x 00:18:03.556 14:37:11 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:18:03.556 14:37:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:18:03.556 14:37:11 -- accel/accel.sh@12 -- # build_accel_config 00:18:03.556 14:37:11 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:03.556 14:37:11 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:03.556 14:37:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:03.556 14:37:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:03.556 14:37:11 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:03.556 14:37:11 -- accel/accel.sh@40 -- # local IFS=, 00:18:03.556 14:37:11 -- accel/accel.sh@41 -- # jq -r . 
00:18:03.556 14:37:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:03.556 14:37:11 -- common/autotest_common.sh@10 -- # set +x 00:18:03.556 14:37:11 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:18:03.556 14:37:11 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:18:03.556 14:37:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:03.556 14:37:11 -- common/autotest_common.sh@10 -- # set +x 00:18:03.556 ************************************ 00:18:03.556 START TEST accel_missing_filename 00:18:03.556 ************************************ 00:18:03.556 14:37:11 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:18:03.556 14:37:11 -- common/autotest_common.sh@638 -- # local es=0 00:18:03.556 14:37:11 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:18:03.556 14:37:11 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:18:03.556 14:37:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:03.556 14:37:11 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:18:03.556 14:37:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:03.556 14:37:11 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:18:03.556 14:37:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:18:03.556 14:37:11 -- accel/accel.sh@12 -- # build_accel_config 00:18:03.556 14:37:11 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:03.556 14:37:11 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:03.556 14:37:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:03.556 14:37:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:03.556 14:37:11 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:03.556 14:37:11 -- accel/accel.sh@40 -- # local IFS=, 00:18:03.556 14:37:11 -- accel/accel.sh@41 -- # jq -r . 00:18:03.556 [2024-04-17 14:37:11.783615] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:18:03.556 [2024-04-17 14:37:11.783985] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64778 ] 00:18:03.556 [2024-04-17 14:37:11.955246] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.814 [2024-04-17 14:37:12.208649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.072 [2024-04-17 14:37:12.472343] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:04.638 [2024-04-17 14:37:13.063393] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:18:05.203 A filename is required. 
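The es= steps traced just below show how the NOT wrapper turns this expected failure into a pass: accel_perf exits with status 234, which is above 128, so it is folded down to 106; the case statement then collapses any remaining nonzero status to es=1, and (( !es == 0 )) evaluates true exactly when the wrapped command failed. A rough sketch of that logic, inferred from the autotest_common.sh@638 through @665 lines in this trace (the real helper lives in test/common/autotest_common.sh and also performs the valid_exec_arg probing elided here):

    # NOT <cmd>: succeed exactly when <cmd> fails
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && es=$(( es - 128 ))   # fold signal-range statuses, e.g. 234 -> 106
        case "$es" in
            0) ;;                              # command unexpectedly succeeded
            *) es=1 ;;                         # any failure collapses to 1
        esac
        (( !es == 0 ))                         # arithmetic truth: status 0 iff es != 0
    }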
00:18:05.203 14:37:13 -- common/autotest_common.sh@641 -- # es=234 00:18:05.203 14:37:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:05.203 14:37:13 -- common/autotest_common.sh@650 -- # es=106 00:18:05.203 14:37:13 -- common/autotest_common.sh@651 -- # case "$es" in 00:18:05.203 14:37:13 -- common/autotest_common.sh@658 -- # es=1 00:18:05.203 14:37:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:05.203 00:18:05.203 real 0m1.824s 00:18:05.203 user 0m1.537s 00:18:05.203 sys 0m0.203s 00:18:05.203 14:37:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:05.203 14:37:13 -- common/autotest_common.sh@10 -- # set +x 00:18:05.203 ************************************ 00:18:05.203 END TEST accel_missing_filename 00:18:05.203 ************************************ 00:18:05.203 14:37:13 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:18:05.203 14:37:13 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:18:05.203 14:37:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:05.203 14:37:13 -- common/autotest_common.sh@10 -- # set +x 00:18:05.203 ************************************ 00:18:05.203 START TEST accel_compress_verify 00:18:05.203 ************************************ 00:18:05.203 14:37:13 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:18:05.203 14:37:13 -- common/autotest_common.sh@638 -- # local es=0 00:18:05.203 14:37:13 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:18:05.203 14:37:13 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:18:05.203 14:37:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:05.203 14:37:13 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:18:05.203 14:37:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:05.203 14:37:13 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:18:05.203 14:37:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:18:05.203 14:37:13 -- accel/accel.sh@12 -- # build_accel_config 00:18:05.203 14:37:13 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:05.203 14:37:13 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:05.203 14:37:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:05.203 14:37:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:05.203 14:37:13 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:05.203 14:37:13 -- accel/accel.sh@40 -- # local IFS=, 00:18:05.203 14:37:13 -- accel/accel.sh@41 -- # jq -r . 00:18:05.203 [2024-04-17 14:37:13.731446] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:18:05.203 [2024-04-17 14:37:13.732335] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64824 ] 00:18:05.461 [2024-04-17 14:37:13.919025] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.731 [2024-04-17 14:37:14.166047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.989 [2024-04-17 14:37:14.431099] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:06.553 [2024-04-17 14:37:15.027007] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:18:07.119 00:18:07.119 Compression does not support the verify option, aborting. 00:18:07.119 14:37:15 -- common/autotest_common.sh@641 -- # es=161 00:18:07.119 14:37:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:07.119 14:37:15 -- common/autotest_common.sh@650 -- # es=33 00:18:07.119 14:37:15 -- common/autotest_common.sh@651 -- # case "$es" in 00:18:07.119 14:37:15 -- common/autotest_common.sh@658 -- # es=1 00:18:07.119 14:37:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:07.119 00:18:07.119 real 0m1.827s 00:18:07.119 user 0m1.565s 00:18:07.119 sys 0m0.212s 00:18:07.119 ************************************ 00:18:07.119 END TEST accel_compress_verify 00:18:07.119 ************************************ 00:18:07.119 14:37:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:07.119 14:37:15 -- common/autotest_common.sh@10 -- # set +x 00:18:07.119 14:37:15 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:18:07.119 14:37:15 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:18:07.119 14:37:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:07.119 14:37:15 -- common/autotest_common.sh@10 -- # set +x 00:18:07.119 ************************************ 00:18:07.119 START TEST accel_wrong_workload 00:18:07.119 ************************************ 00:18:07.119 14:37:15 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:18:07.119 14:37:15 -- common/autotest_common.sh@638 -- # local es=0 00:18:07.119 14:37:15 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:18:07.119 14:37:15 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:18:07.119 14:37:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:07.119 14:37:15 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:18:07.119 14:37:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:07.119 14:37:15 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:18:07.119 14:37:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:18:07.119 14:37:15 -- accel/accel.sh@12 -- # build_accel_config 00:18:07.119 14:37:15 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:07.119 14:37:15 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:07.119 14:37:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:07.119 14:37:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:07.119 14:37:15 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:07.119 14:37:15 -- accel/accel.sh@40 -- # local IFS=, 00:18:07.119 14:37:15 -- accel/accel.sh@41 -- # jq -r . 
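Each accel_perf launch in this section is preceded by the same build_accel_config trace: an accel_json_cfg array starts empty, three numeric knobs are tested with [[ 0 -gt 0 ]] (all disabled on this host) and one module override with [[ -n '' ]] (unset), then the fragments are joined with IFS=, and piped through jq -r . to become the -c /dev/fd/62 config. A loose sketch of that shape; the knob and RPC method names below are assumptions for illustration, since the xtrace only shows the already-expanded checks:

    build_accel_config() {
        accel_json_cfg=()
        # hypothetical knobs; this log only shows three anonymous [[ 0 -gt 0 ]] tests
        [[ ${SPDK_TEST_ACCEL_DSA:-0} -gt 0 ]] && accel_json_cfg+=('{"method": "dsa_scan_accel_module"}')
        [[ ${SPDK_TEST_ACCEL_IAA:-0} -gt 0 ]] && accel_json_cfg+=('{"method": "iaa_scan_accel_module"}')
        [[ -n ${ACCEL_MODULE:-} ]] && accel_json_cfg+=("$ACCEL_MODULE")
        local IFS=,
        jq -r . <<< "{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [${accel_json_cfg[*]}]}]}"
    }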
00:18:07.119 Unsupported workload type: foobar 00:18:07.119 [2024-04-17 14:37:15.692047] app.c:1339:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:18:07.119 accel_perf options: 00:18:07.119 [-h help message] 00:18:07.119 [-q queue depth per core] 00:18:07.119 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:18:07.119 [-T number of threads per core 00:18:07.119 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:18:07.119 [-t time in seconds] 00:18:07.119 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:18:07.119 [ dif_verify, , dif_generate, dif_generate_copy 00:18:07.119 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:18:07.119 [-l for compress/decompress workloads, name of uncompressed input file 00:18:07.119 [-S for crc32c workload, use this seed value (default 0) 00:18:07.119 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:18:07.119 [-f for fill workload, use this BYTE value (default 255) 00:18:07.119 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:18:07.119 [-y verify result if this switch is on] 00:18:07.119 [-a tasks to allocate per core (default: same value as -q)] 00:18:07.119 Can be used to spread operations across a wider range of memory. 00:18:07.377 14:37:15 -- common/autotest_common.sh@641 -- # es=1 00:18:07.377 14:37:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:07.377 14:37:15 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:07.377 14:37:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:07.377 00:18:07.377 real 0m0.092s 00:18:07.377 user 0m0.084s 00:18:07.377 sys 0m0.051s 00:18:07.377 14:37:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:07.377 14:37:15 -- common/autotest_common.sh@10 -- # set +x 00:18:07.377 ************************************ 00:18:07.377 END TEST accel_wrong_workload 00:18:07.377 ************************************ 00:18:07.377 14:37:15 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:18:07.377 14:37:15 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:18:07.377 14:37:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:07.377 14:37:15 -- common/autotest_common.sh@10 -- # set +x 00:18:07.377 ************************************ 00:18:07.377 START TEST accel_negative_buffers 00:18:07.377 ************************************ 00:18:07.377 14:37:15 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:18:07.377 14:37:15 -- common/autotest_common.sh@638 -- # local es=0 00:18:07.377 14:37:15 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:18:07.377 14:37:15 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:18:07.377 14:37:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:07.377 14:37:15 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:18:07.377 14:37:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:07.377 14:37:15 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:18:07.377 14:37:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:18:07.377 14:37:15 -- accel/accel.sh@12 -- # 
build_accel_config 00:18:07.378 14:37:15 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:07.378 14:37:15 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:07.378 14:37:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:07.378 14:37:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:07.378 14:37:15 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:07.378 14:37:15 -- accel/accel.sh@40 -- # local IFS=, 00:18:07.378 14:37:15 -- accel/accel.sh@41 -- # jq -r . 00:18:07.378 -x option must be non-negative. 00:18:07.378 [2024-04-17 14:37:15.902558] app.c:1339:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:18:07.378 accel_perf options: 00:18:07.378 [-h help message] 00:18:07.378 [-q queue depth per core] 00:18:07.378 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:18:07.378 [-T number of threads per core 00:18:07.378 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:18:07.378 [-t time in seconds] 00:18:07.378 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:18:07.378 [ dif_verify, , dif_generate, dif_generate_copy 00:18:07.378 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:18:07.378 [-l for compress/decompress workloads, name of uncompressed input file 00:18:07.378 [-S for crc32c workload, use this seed value (default 0) 00:18:07.378 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:18:07.378 [-f for fill workload, use this BYTE value (default 255) 00:18:07.378 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:18:07.378 [-y verify result if this switch is on] 00:18:07.378 [-a tasks to allocate per core (default: same value as -q)] 00:18:07.378 Can be used to spread operations across a wider range of memory. 
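Putting the help text above into practice: the passing accel tests in this log drive the same binary, with -w selecting the workload and the remaining knobs matching the usage block. Two invocations lifted from the traces in this run, with the -c /dev/fd/62 config argument dropped so they run standalone on the default software module:

    # TEST accel_crc32c: 1-second software crc32c run, seed 32, verify results
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y

    # TEST accel_fill: fill with byte value 128, queue depth 64, 64 tasks per core
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y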
00:18:07.378 14:37:15 -- common/autotest_common.sh@641 -- # es=1 00:18:07.378 14:37:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:07.378 14:37:15 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:07.378 14:37:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:07.378 00:18:07.378 real 0m0.073s 00:18:07.378 user 0m0.071s 00:18:07.378 sys 0m0.040s 00:18:07.378 14:37:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:07.378 14:37:15 -- common/autotest_common.sh@10 -- # set +x 00:18:07.378 ************************************ 00:18:07.378 END TEST accel_negative_buffers 00:18:07.378 ************************************ 00:18:07.378 14:37:15 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:18:07.378 14:37:15 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:18:07.378 14:37:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:07.378 14:37:15 -- common/autotest_common.sh@10 -- # set +x 00:18:07.637 ************************************ 00:18:07.637 START TEST accel_crc32c 00:18:07.637 ************************************ 00:18:07.637 14:37:16 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:18:07.638 14:37:16 -- accel/accel.sh@16 -- # local accel_opc 00:18:07.638 14:37:16 -- accel/accel.sh@17 -- # local accel_module 00:18:07.638 14:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:07.638 14:37:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:18:07.638 14:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:07.638 14:37:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:18:07.638 14:37:16 -- accel/accel.sh@12 -- # build_accel_config 00:18:07.638 14:37:16 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:07.638 14:37:16 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:07.638 14:37:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:07.638 14:37:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:07.638 14:37:16 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:07.638 14:37:16 -- accel/accel.sh@40 -- # local IFS=, 00:18:07.638 14:37:16 -- accel/accel.sh@41 -- # jq -r . 00:18:07.638 [2024-04-17 14:37:16.104364] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:18:07.638 [2024-04-17 14:37:16.104684] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64922 ] 00:18:07.896 [2024-04-17 14:37:16.276581] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.155 [2024-04-17 14:37:16.630642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.414 14:37:16 -- accel/accel.sh@20 -- # val= 00:18:08.414 14:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:08.414 14:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:08.414 14:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:08.414 14:37:16 -- accel/accel.sh@20 -- # val= 00:18:08.414 14:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:08.414 14:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:08.414 14:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:08.414 14:37:16 -- accel/accel.sh@20 -- # val=0x1 00:18:08.414 14:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:08.414 14:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:08.414 14:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:08.414 14:37:16 -- accel/accel.sh@20 -- # val= 00:18:08.414 14:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:08.414 14:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:08.414 14:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:08.414 14:37:16 -- accel/accel.sh@20 -- # val= 00:18:08.414 14:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:08.414 14:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:08.414 14:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:08.414 14:37:16 -- accel/accel.sh@20 -- # val=crc32c 00:18:08.414 14:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:08.414 14:37:16 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:18:08.414 14:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:08.414 14:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:08.414 14:37:16 -- accel/accel.sh@20 -- # val=32 00:18:08.414 14:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:08.414 14:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:08.414 14:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:08.414 14:37:16 -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:08.414 14:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:08.414 14:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:08.414 14:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:08.414 14:37:16 -- accel/accel.sh@20 -- # val= 00:18:08.414 14:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:08.414 14:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:08.414 14:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:08.414 14:37:16 -- accel/accel.sh@20 -- # val=software 00:18:08.414 14:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:08.414 14:37:16 -- accel/accel.sh@22 -- # accel_module=software 00:18:08.414 14:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:08.414 14:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:08.414 14:37:16 -- accel/accel.sh@20 -- # val=32 00:18:08.414 14:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:08.414 14:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:08.414 14:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:08.414 14:37:16 -- accel/accel.sh@20 -- # val=32 00:18:08.414 14:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:08.414 14:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:08.414 14:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:08.414 14:37:16 -- accel/accel.sh@20 -- # val=1 00:18:08.414 14:37:16 
-- accel/accel.sh@21 -- # case "$var" in 00:18:08.414 14:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:08.414 14:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:08.414 14:37:16 -- accel/accel.sh@20 -- # val='1 seconds' 00:18:08.414 14:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:08.414 14:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:08.414 14:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:08.414 14:37:16 -- accel/accel.sh@20 -- # val=Yes 00:18:08.414 14:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:08.414 14:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:08.414 14:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:08.414 14:37:16 -- accel/accel.sh@20 -- # val= 00:18:08.414 14:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:08.414 14:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:08.414 14:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:08.414 14:37:16 -- accel/accel.sh@20 -- # val= 00:18:08.414 14:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:08.414 14:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:08.414 14:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:10.974 14:37:19 -- accel/accel.sh@20 -- # val= 00:18:10.974 14:37:19 -- accel/accel.sh@21 -- # case "$var" in 00:18:10.974 14:37:19 -- accel/accel.sh@19 -- # IFS=: 00:18:10.974 14:37:19 -- accel/accel.sh@19 -- # read -r var val 00:18:10.974 14:37:19 -- accel/accel.sh@20 -- # val= 00:18:10.974 14:37:19 -- accel/accel.sh@21 -- # case "$var" in 00:18:10.974 14:37:19 -- accel/accel.sh@19 -- # IFS=: 00:18:10.974 14:37:19 -- accel/accel.sh@19 -- # read -r var val 00:18:10.974 14:37:19 -- accel/accel.sh@20 -- # val= 00:18:10.974 14:37:19 -- accel/accel.sh@21 -- # case "$var" in 00:18:10.974 14:37:19 -- accel/accel.sh@19 -- # IFS=: 00:18:10.974 14:37:19 -- accel/accel.sh@19 -- # read -r var val 00:18:10.974 14:37:19 -- accel/accel.sh@20 -- # val= 00:18:10.974 14:37:19 -- accel/accel.sh@21 -- # case "$var" in 00:18:10.974 14:37:19 -- accel/accel.sh@19 -- # IFS=: 00:18:10.974 14:37:19 -- accel/accel.sh@19 -- # read -r var val 00:18:10.974 14:37:19 -- accel/accel.sh@20 -- # val= 00:18:10.974 14:37:19 -- accel/accel.sh@21 -- # case "$var" in 00:18:10.974 14:37:19 -- accel/accel.sh@19 -- # IFS=: 00:18:10.974 14:37:19 -- accel/accel.sh@19 -- # read -r var val 00:18:10.974 14:37:19 -- accel/accel.sh@20 -- # val= 00:18:10.974 14:37:19 -- accel/accel.sh@21 -- # case "$var" in 00:18:10.974 14:37:19 -- accel/accel.sh@19 -- # IFS=: 00:18:10.974 14:37:19 -- accel/accel.sh@19 -- # read -r var val 00:18:10.974 14:37:19 -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:10.974 14:37:19 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:18:10.974 14:37:19 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:10.974 00:18:10.974 real 0m2.981s 00:18:10.974 user 0m2.663s 00:18:10.974 sys 0m0.215s 00:18:10.974 14:37:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:10.974 14:37:19 -- common/autotest_common.sh@10 -- # set +x 00:18:10.974 ************************************ 00:18:10.974 END TEST accel_crc32c 00:18:10.974 ************************************ 00:18:10.974 14:37:19 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:18:10.974 14:37:19 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:18:10.974 14:37:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:10.974 14:37:19 -- common/autotest_common.sh@10 -- # set +x 00:18:10.974 ************************************ 00:18:10.974 START TEST accel_crc32c_C2 00:18:10.974 
************************************ 00:18:10.974 14:37:19 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:18:10.974 14:37:19 -- accel/accel.sh@16 -- # local accel_opc 00:18:10.974 14:37:19 -- accel/accel.sh@17 -- # local accel_module 00:18:10.974 14:37:19 -- accel/accel.sh@19 -- # IFS=: 00:18:10.974 14:37:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:18:10.974 14:37:19 -- accel/accel.sh@19 -- # read -r var val 00:18:10.974 14:37:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:18:10.974 14:37:19 -- accel/accel.sh@12 -- # build_accel_config 00:18:10.974 14:37:19 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:10.974 14:37:19 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:10.974 14:37:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:10.974 14:37:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:10.974 14:37:19 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:10.974 14:37:19 -- accel/accel.sh@40 -- # local IFS=, 00:18:10.974 14:37:19 -- accel/accel.sh@41 -- # jq -r . 00:18:10.974 [2024-04-17 14:37:19.224561] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:18:10.974 [2024-04-17 14:37:19.224877] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64978 ] 00:18:10.974 [2024-04-17 14:37:19.389330] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.233 [2024-04-17 14:37:19.650285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.492 14:37:19 -- accel/accel.sh@20 -- # val= 00:18:11.492 14:37:19 -- accel/accel.sh@21 -- # case "$var" in 00:18:11.492 14:37:19 -- accel/accel.sh@19 -- # IFS=: 00:18:11.492 14:37:19 -- accel/accel.sh@19 -- # read -r var val 00:18:11.492 14:37:19 -- accel/accel.sh@20 -- # val= 00:18:11.492 14:37:19 -- accel/accel.sh@21 -- # case "$var" in 00:18:11.492 14:37:19 -- accel/accel.sh@19 -- # IFS=: 00:18:11.492 14:37:19 -- accel/accel.sh@19 -- # read -r var val 00:18:11.492 14:37:19 -- accel/accel.sh@20 -- # val=0x1 00:18:11.492 14:37:19 -- accel/accel.sh@21 -- # case "$var" in 00:18:11.492 14:37:19 -- accel/accel.sh@19 -- # IFS=: 00:18:11.492 14:37:19 -- accel/accel.sh@19 -- # read -r var val 00:18:11.492 14:37:19 -- accel/accel.sh@20 -- # val= 00:18:11.492 14:37:19 -- accel/accel.sh@21 -- # case "$var" in 00:18:11.492 14:37:19 -- accel/accel.sh@19 -- # IFS=: 00:18:11.492 14:37:19 -- accel/accel.sh@19 -- # read -r var val 00:18:11.492 14:37:19 -- accel/accel.sh@20 -- # val= 00:18:11.492 14:37:19 -- accel/accel.sh@21 -- # case "$var" in 00:18:11.492 14:37:19 -- accel/accel.sh@19 -- # IFS=: 00:18:11.492 14:37:19 -- accel/accel.sh@19 -- # read -r var val 00:18:11.492 14:37:19 -- accel/accel.sh@20 -- # val=crc32c 00:18:11.492 14:37:19 -- accel/accel.sh@21 -- # case "$var" in 00:18:11.492 14:37:19 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:18:11.492 14:37:19 -- accel/accel.sh@19 -- # IFS=: 00:18:11.492 14:37:19 -- accel/accel.sh@19 -- # read -r var val 00:18:11.492 14:37:19 -- accel/accel.sh@20 -- # val=0 00:18:11.492 14:37:19 -- accel/accel.sh@21 -- # case "$var" in 00:18:11.492 14:37:19 -- accel/accel.sh@19 -- # IFS=: 00:18:11.492 14:37:19 -- accel/accel.sh@19 -- # read -r var val 00:18:11.492 14:37:19 -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:11.492 14:37:19 -- accel/accel.sh@21 -- # case "$var" 
in 00:18:11.492 14:37:19 -- accel/accel.sh@19 -- # IFS=: 00:18:11.492 14:37:19 -- accel/accel.sh@19 -- # read -r var val 00:18:11.492 14:37:19 -- accel/accel.sh@20 -- # val= 00:18:11.492 14:37:19 -- accel/accel.sh@21 -- # case "$var" in 00:18:11.492 14:37:19 -- accel/accel.sh@19 -- # IFS=: 00:18:11.492 14:37:19 -- accel/accel.sh@19 -- # read -r var val 00:18:11.492 14:37:19 -- accel/accel.sh@20 -- # val=software 00:18:11.492 14:37:19 -- accel/accel.sh@21 -- # case "$var" in 00:18:11.492 14:37:19 -- accel/accel.sh@22 -- # accel_module=software 00:18:11.492 14:37:19 -- accel/accel.sh@19 -- # IFS=: 00:18:11.492 14:37:19 -- accel/accel.sh@19 -- # read -r var val 00:18:11.492 14:37:19 -- accel/accel.sh@20 -- # val=32 00:18:11.492 14:37:19 -- accel/accel.sh@21 -- # case "$var" in 00:18:11.492 14:37:19 -- accel/accel.sh@19 -- # IFS=: 00:18:11.492 14:37:19 -- accel/accel.sh@19 -- # read -r var val 00:18:11.492 14:37:19 -- accel/accel.sh@20 -- # val=32 00:18:11.492 14:37:19 -- accel/accel.sh@21 -- # case "$var" in 00:18:11.492 14:37:19 -- accel/accel.sh@19 -- # IFS=: 00:18:11.492 14:37:19 -- accel/accel.sh@19 -- # read -r var val 00:18:11.492 14:37:19 -- accel/accel.sh@20 -- # val=1 00:18:11.492 14:37:19 -- accel/accel.sh@21 -- # case "$var" in 00:18:11.492 14:37:19 -- accel/accel.sh@19 -- # IFS=: 00:18:11.492 14:37:19 -- accel/accel.sh@19 -- # read -r var val 00:18:11.492 14:37:19 -- accel/accel.sh@20 -- # val='1 seconds' 00:18:11.492 14:37:19 -- accel/accel.sh@21 -- # case "$var" in 00:18:11.492 14:37:19 -- accel/accel.sh@19 -- # IFS=: 00:18:11.492 14:37:19 -- accel/accel.sh@19 -- # read -r var val 00:18:11.492 14:37:19 -- accel/accel.sh@20 -- # val=Yes 00:18:11.492 14:37:19 -- accel/accel.sh@21 -- # case "$var" in 00:18:11.492 14:37:19 -- accel/accel.sh@19 -- # IFS=: 00:18:11.492 14:37:19 -- accel/accel.sh@19 -- # read -r var val 00:18:11.492 14:37:19 -- accel/accel.sh@20 -- # val= 00:18:11.492 14:37:19 -- accel/accel.sh@21 -- # case "$var" in 00:18:11.492 14:37:19 -- accel/accel.sh@19 -- # IFS=: 00:18:11.492 14:37:19 -- accel/accel.sh@19 -- # read -r var val 00:18:11.492 14:37:19 -- accel/accel.sh@20 -- # val= 00:18:11.492 14:37:19 -- accel/accel.sh@21 -- # case "$var" in 00:18:11.492 14:37:19 -- accel/accel.sh@19 -- # IFS=: 00:18:11.492 14:37:19 -- accel/accel.sh@19 -- # read -r var val 00:18:13.396 14:37:21 -- accel/accel.sh@20 -- # val= 00:18:13.396 14:37:21 -- accel/accel.sh@21 -- # case "$var" in 00:18:13.396 14:37:21 -- accel/accel.sh@19 -- # IFS=: 00:18:13.396 14:37:21 -- accel/accel.sh@19 -- # read -r var val 00:18:13.396 14:37:21 -- accel/accel.sh@20 -- # val= 00:18:13.396 14:37:21 -- accel/accel.sh@21 -- # case "$var" in 00:18:13.396 14:37:21 -- accel/accel.sh@19 -- # IFS=: 00:18:13.396 14:37:21 -- accel/accel.sh@19 -- # read -r var val 00:18:13.396 14:37:21 -- accel/accel.sh@20 -- # val= 00:18:13.396 14:37:21 -- accel/accel.sh@21 -- # case "$var" in 00:18:13.396 14:37:21 -- accel/accel.sh@19 -- # IFS=: 00:18:13.396 14:37:21 -- accel/accel.sh@19 -- # read -r var val 00:18:13.396 14:37:21 -- accel/accel.sh@20 -- # val= 00:18:13.396 14:37:21 -- accel/accel.sh@21 -- # case "$var" in 00:18:13.396 14:37:21 -- accel/accel.sh@19 -- # IFS=: 00:18:13.396 14:37:21 -- accel/accel.sh@19 -- # read -r var val 00:18:13.396 14:37:21 -- accel/accel.sh@20 -- # val= 00:18:13.396 14:37:21 -- accel/accel.sh@21 -- # case "$var" in 00:18:13.396 14:37:21 -- accel/accel.sh@19 -- # IFS=: 00:18:13.396 14:37:21 -- accel/accel.sh@19 -- # read -r var val 00:18:13.396 14:37:21 -- accel/accel.sh@20 -- # val= 
00:18:13.396 14:37:21 -- accel/accel.sh@21 -- # case "$var" in 00:18:13.396 14:37:21 -- accel/accel.sh@19 -- # IFS=: 00:18:13.396 14:37:21 -- accel/accel.sh@19 -- # read -r var val 00:18:13.396 14:37:21 -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:13.396 14:37:21 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:18:13.396 14:37:21 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:13.396 00:18:13.396 real 0m2.787s 00:18:13.396 user 0m2.489s 00:18:13.396 sys 0m0.199s 00:18:13.396 14:37:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:13.396 14:37:21 -- common/autotest_common.sh@10 -- # set +x 00:18:13.396 ************************************ 00:18:13.396 END TEST accel_crc32c_C2 00:18:13.396 ************************************ 00:18:13.655 14:37:22 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:18:13.655 14:37:22 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:18:13.655 14:37:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:13.655 14:37:22 -- common/autotest_common.sh@10 -- # set +x 00:18:13.655 ************************************ 00:18:13.655 START TEST accel_copy 00:18:13.655 ************************************ 00:18:13.655 14:37:22 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:18:13.655 14:37:22 -- accel/accel.sh@16 -- # local accel_opc 00:18:13.655 14:37:22 -- accel/accel.sh@17 -- # local accel_module 00:18:13.655 14:37:22 -- accel/accel.sh@19 -- # IFS=: 00:18:13.655 14:37:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:18:13.655 14:37:22 -- accel/accel.sh@19 -- # read -r var val 00:18:13.655 14:37:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:18:13.655 14:37:22 -- accel/accel.sh@12 -- # build_accel_config 00:18:13.655 14:37:22 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:13.655 14:37:22 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:13.655 14:37:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:13.655 14:37:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:13.655 14:37:22 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:13.655 14:37:22 -- accel/accel.sh@40 -- # local IFS=, 00:18:13.655 14:37:22 -- accel/accel.sh@41 -- # jq -r . 00:18:13.655 [2024-04-17 14:37:22.147135] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:18:13.655 [2024-04-17 14:37:22.147622] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65028 ] 00:18:13.913 [2024-04-17 14:37:22.334164] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.172 [2024-04-17 14:37:22.636859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.430 14:37:22 -- accel/accel.sh@20 -- # val= 00:18:14.430 14:37:22 -- accel/accel.sh@21 -- # case "$var" in 00:18:14.430 14:37:22 -- accel/accel.sh@19 -- # IFS=: 00:18:14.430 14:37:22 -- accel/accel.sh@19 -- # read -r var val 00:18:14.430 14:37:22 -- accel/accel.sh@20 -- # val= 00:18:14.430 14:37:22 -- accel/accel.sh@21 -- # case "$var" in 00:18:14.430 14:37:22 -- accel/accel.sh@19 -- # IFS=: 00:18:14.430 14:37:22 -- accel/accel.sh@19 -- # read -r var val 00:18:14.430 14:37:22 -- accel/accel.sh@20 -- # val=0x1 00:18:14.430 14:37:22 -- accel/accel.sh@21 -- # case "$var" in 00:18:14.430 14:37:22 -- accel/accel.sh@19 -- # IFS=: 00:18:14.430 14:37:22 -- accel/accel.sh@19 -- # read -r var val 00:18:14.430 14:37:22 -- accel/accel.sh@20 -- # val= 00:18:14.430 14:37:22 -- accel/accel.sh@21 -- # case "$var" in 00:18:14.430 14:37:22 -- accel/accel.sh@19 -- # IFS=: 00:18:14.430 14:37:22 -- accel/accel.sh@19 -- # read -r var val 00:18:14.430 14:37:22 -- accel/accel.sh@20 -- # val= 00:18:14.430 14:37:22 -- accel/accel.sh@21 -- # case "$var" in 00:18:14.430 14:37:22 -- accel/accel.sh@19 -- # IFS=: 00:18:14.430 14:37:22 -- accel/accel.sh@19 -- # read -r var val 00:18:14.430 14:37:22 -- accel/accel.sh@20 -- # val=copy 00:18:14.430 14:37:22 -- accel/accel.sh@21 -- # case "$var" in 00:18:14.430 14:37:22 -- accel/accel.sh@23 -- # accel_opc=copy 00:18:14.430 14:37:22 -- accel/accel.sh@19 -- # IFS=: 00:18:14.430 14:37:22 -- accel/accel.sh@19 -- # read -r var val 00:18:14.430 14:37:22 -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:14.430 14:37:22 -- accel/accel.sh@21 -- # case "$var" in 00:18:14.430 14:37:22 -- accel/accel.sh@19 -- # IFS=: 00:18:14.430 14:37:22 -- accel/accel.sh@19 -- # read -r var val 00:18:14.430 14:37:22 -- accel/accel.sh@20 -- # val= 00:18:14.430 14:37:22 -- accel/accel.sh@21 -- # case "$var" in 00:18:14.430 14:37:22 -- accel/accel.sh@19 -- # IFS=: 00:18:14.430 14:37:22 -- accel/accel.sh@19 -- # read -r var val 00:18:14.430 14:37:22 -- accel/accel.sh@20 -- # val=software 00:18:14.430 14:37:22 -- accel/accel.sh@21 -- # case "$var" in 00:18:14.430 14:37:22 -- accel/accel.sh@22 -- # accel_module=software 00:18:14.430 14:37:22 -- accel/accel.sh@19 -- # IFS=: 00:18:14.430 14:37:22 -- accel/accel.sh@19 -- # read -r var val 00:18:14.430 14:37:22 -- accel/accel.sh@20 -- # val=32 00:18:14.430 14:37:22 -- accel/accel.sh@21 -- # case "$var" in 00:18:14.430 14:37:22 -- accel/accel.sh@19 -- # IFS=: 00:18:14.430 14:37:22 -- accel/accel.sh@19 -- # read -r var val 00:18:14.430 14:37:22 -- accel/accel.sh@20 -- # val=32 00:18:14.430 14:37:22 -- accel/accel.sh@21 -- # case "$var" in 00:18:14.430 14:37:22 -- accel/accel.sh@19 -- # IFS=: 00:18:14.430 14:37:22 -- accel/accel.sh@19 -- # read -r var val 00:18:14.430 14:37:22 -- accel/accel.sh@20 -- # val=1 00:18:14.430 14:37:22 -- accel/accel.sh@21 -- # case "$var" in 00:18:14.430 14:37:22 -- accel/accel.sh@19 -- # IFS=: 00:18:14.430 14:37:22 -- accel/accel.sh@19 -- # read -r var val 00:18:14.430 14:37:22 -- accel/accel.sh@20 -- # val='1 seconds' 00:18:14.430 
14:37:22 -- accel/accel.sh@21 -- # case "$var" in 00:18:14.430 14:37:22 -- accel/accel.sh@19 -- # IFS=: 00:18:14.430 14:37:22 -- accel/accel.sh@19 -- # read -r var val 00:18:14.430 14:37:22 -- accel/accel.sh@20 -- # val=Yes 00:18:14.430 14:37:22 -- accel/accel.sh@21 -- # case "$var" in 00:18:14.430 14:37:22 -- accel/accel.sh@19 -- # IFS=: 00:18:14.430 14:37:22 -- accel/accel.sh@19 -- # read -r var val 00:18:14.430 14:37:22 -- accel/accel.sh@20 -- # val= 00:18:14.431 14:37:22 -- accel/accel.sh@21 -- # case "$var" in 00:18:14.431 14:37:22 -- accel/accel.sh@19 -- # IFS=: 00:18:14.431 14:37:22 -- accel/accel.sh@19 -- # read -r var val 00:18:14.431 14:37:22 -- accel/accel.sh@20 -- # val= 00:18:14.431 14:37:22 -- accel/accel.sh@21 -- # case "$var" in 00:18:14.431 14:37:22 -- accel/accel.sh@19 -- # IFS=: 00:18:14.431 14:37:22 -- accel/accel.sh@19 -- # read -r var val 00:18:16.964 14:37:24 -- accel/accel.sh@20 -- # val= 00:18:16.964 14:37:24 -- accel/accel.sh@21 -- # case "$var" in 00:18:16.964 14:37:24 -- accel/accel.sh@19 -- # IFS=: 00:18:16.964 14:37:24 -- accel/accel.sh@19 -- # read -r var val 00:18:16.964 14:37:24 -- accel/accel.sh@20 -- # val= 00:18:16.964 14:37:24 -- accel/accel.sh@21 -- # case "$var" in 00:18:16.964 14:37:24 -- accel/accel.sh@19 -- # IFS=: 00:18:16.964 14:37:24 -- accel/accel.sh@19 -- # read -r var val 00:18:16.964 14:37:24 -- accel/accel.sh@20 -- # val= 00:18:16.964 14:37:24 -- accel/accel.sh@21 -- # case "$var" in 00:18:16.964 14:37:24 -- accel/accel.sh@19 -- # IFS=: 00:18:16.964 14:37:24 -- accel/accel.sh@19 -- # read -r var val 00:18:16.964 14:37:24 -- accel/accel.sh@20 -- # val= 00:18:16.964 14:37:24 -- accel/accel.sh@21 -- # case "$var" in 00:18:16.964 14:37:24 -- accel/accel.sh@19 -- # IFS=: 00:18:16.964 14:37:24 -- accel/accel.sh@19 -- # read -r var val 00:18:16.964 14:37:24 -- accel/accel.sh@20 -- # val= 00:18:16.964 14:37:24 -- accel/accel.sh@21 -- # case "$var" in 00:18:16.964 14:37:24 -- accel/accel.sh@19 -- # IFS=: 00:18:16.964 14:37:24 -- accel/accel.sh@19 -- # read -r var val 00:18:16.964 14:37:24 -- accel/accel.sh@20 -- # val= 00:18:16.964 14:37:24 -- accel/accel.sh@21 -- # case "$var" in 00:18:16.964 14:37:24 -- accel/accel.sh@19 -- # IFS=: 00:18:16.964 14:37:24 -- accel/accel.sh@19 -- # read -r var val 00:18:16.964 14:37:24 -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:16.964 14:37:24 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:18:16.964 14:37:24 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:16.964 00:18:16.964 real 0m2.905s 00:18:16.964 user 0m2.594s 00:18:16.964 sys 0m0.211s 00:18:16.964 14:37:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:16.964 14:37:24 -- common/autotest_common.sh@10 -- # set +x 00:18:16.964 ************************************ 00:18:16.964 END TEST accel_copy 00:18:16.964 ************************************ 00:18:16.964 14:37:25 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:18:16.964 14:37:25 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:18:16.964 14:37:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:16.964 14:37:25 -- common/autotest_common.sh@10 -- # set +x 00:18:16.964 ************************************ 00:18:16.965 START TEST accel_fill 00:18:16.965 ************************************ 00:18:16.965 14:37:25 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:18:16.965 14:37:25 -- accel/accel.sh@16 -- # local accel_opc 00:18:16.965 14:37:25 -- accel/accel.sh@17 -- # local 
accel_module
00:18:16.965 14:37:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:18:16.965 14:37:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:18:16.965 [2024-04-17 14:37:25.191059] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization...
00:18:16.965 [2024-04-17 14:37:25.191476] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65079 ]
00:18:16.965 [2024-04-17 14:37:25.380614] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:17.223 [2024-04-17 14:37:25.714361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:17.483 14:37:25 -- accel/accel.sh@20 -- # val=0x1
00:18:17.483 14:37:26 -- accel/accel.sh@20 -- # val=fill
00:18:17.483 14:37:26 -- accel/accel.sh@23 -- # accel_opc=fill
00:18:17.483 14:37:26 -- accel/accel.sh@20 -- # val=0x80
00:18:17.483 14:37:26 -- accel/accel.sh@20 -- # val='4096 bytes'
00:18:17.483 14:37:26 -- accel/accel.sh@20 -- # val=software
00:18:17.483 14:37:26 -- accel/accel.sh@22 -- # accel_module=software
00:18:17.483 14:37:26 -- accel/accel.sh@20 -- # val=64
00:18:17.483 14:37:26 -- accel/accel.sh@20 -- # val=64
00:18:17.483 14:37:26 -- accel/accel.sh@20 -- # val=1
00:18:17.483 14:37:26 -- accel/accel.sh@20 -- # val='1 seconds'
00:18:17.483 14:37:26 -- accel/accel.sh@20 -- # val=Yes
00:18:20.021 14:37:28 -- accel/accel.sh@27 -- # [[ -n software ]]
00:18:20.021 14:37:28 -- accel/accel.sh@27 -- # [[ -n fill ]]
00:18:20.021 14:37:28 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:18:20.021 real    0m2.974s
00:18:20.021 user    0m2.639s
00:18:20.021 sys     0m0.233s
00:18:20.021 ************************************
00:18:20.021 END TEST accel_fill
00:18:20.021 ************************************
00:18:20.021 14:37:28 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:18:20.021 ************************************
00:18:20.021 START TEST accel_copy_crc32c
00:18:20.021 ************************************
00:18:20.021 14:37:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:18:20.021 14:37:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:18:20.021 [2024-04-17 14:37:28.270092] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization...
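
accel_fill passed in about 2.97 s of wall time: queue depth 64 (-q 64), 4096-byte buffers, fill byte 0x80 (the -f 128 on the command line), with result verification (-y). Before the copy_crc32c run continues below, a rough C sketch of what the software module's fill opcode amounts to per operation; sw_fill and the harness around it are hypothetical stand-ins, not SPDK's implementation:

    #include <stdint.h>
    #include <string.h>
    #include <assert.h>

    /* The software fill path reduces to one memset of the pattern byte. */
    static void sw_fill(uint8_t *dst, uint8_t pattern, size_t len)
    {
        memset(dst, pattern, len);
    }

    int main(void)
    {
        static uint8_t buf[4096];            /* matches '4096 bytes' above   */
        sw_fill(buf, 0x80, sizeof(buf));     /* 0x80 == the -f 128 pattern   */
        for (size_t i = 0; i < sizeof(buf); i++)
            assert(buf[i] == 0x80);          /* verify, in the spirit of -y  */
        return 0;
    }
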
00:18:20.021 [2024-04-17 14:37:28.270801] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65136 ]
00:18:20.021 [2024-04-17 14:37:28.444644] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:20.279 [2024-04-17 14:37:28.763169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:20.538 14:37:29 -- accel/accel.sh@20 -- # val=0x1
00:18:20.538 14:37:29 -- accel/accel.sh@20 -- # val=copy_crc32c
00:18:20.538 14:37:29 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:18:20.538 14:37:29 -- accel/accel.sh@20 -- # val=0
00:18:20.538 14:37:29 -- accel/accel.sh@20 -- # val='4096 bytes'
00:18:20.538 14:37:29 -- accel/accel.sh@20 -- # val='4096 bytes'
00:18:20.538 14:37:29 -- accel/accel.sh@20 -- # val=software
00:18:20.538 14:37:29 -- accel/accel.sh@22 -- # accel_module=software
00:18:20.538 14:37:29 -- accel/accel.sh@20 -- # val=32
00:18:20.538 14:37:29 -- accel/accel.sh@20 -- # val=32
00:18:20.538 14:37:29 -- accel/accel.sh@20 -- # val=1
00:18:20.538 14:37:29 -- accel/accel.sh@20 -- # val='1 seconds'
00:18:20.538 14:37:29 -- accel/accel.sh@20 -- # val=Yes
00:18:23.072 14:37:31 -- accel/accel.sh@27 -- # [[ -n software ]]
00:18:23.072 14:37:31 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:18:23.072 14:37:31 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:18:23.072 real    0m2.877s
00:18:23.072 user    0m2.575s
00:18:23.072 sys     0m0.201s
00:18:23.072 ************************************
00:18:23.072 END TEST accel_copy_crc32c
00:18:23.072 ************************************
00:18:23.072 14:37:31 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
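
accel_copy_crc32c passed with 4096-byte source and destination buffers and CRC seed 0 (the val=0 above). Conceptually the opcode copies the source and computes a CRC-32C (Castagnoli) over it in the same operation. The bitwise loop below is the slow reference form of that checksum; SPDK's real path is optimized, and the function names here are illustrative only:

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    /* Reference bitwise CRC-32C, reflected polynomial 0x82F63B78. */
    static uint32_t crc32c(uint32_t crc, const uint8_t *buf, size_t len)
    {
        crc = ~crc;
        while (len--) {
            crc ^= *buf++;
            for (int k = 0; k < 8; k++)
                crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
        }
        return ~crc;
    }

    /* copy_crc32c pairs a plain copy with a checksum of the source. */
    static uint32_t copy_crc32c(uint8_t *dst, const uint8_t *src,
                                size_t len, uint32_t seed)
    {
        memcpy(dst, src, len);
        return crc32c(seed, src, len);
    }
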
00:18:23.072 ************************************
00:18:23.072 START TEST accel_copy_crc32c_C2
00:18:23.072 ************************************
00:18:23.072 14:37:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:18:23.072 14:37:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:18:23.072 [2024-04-17 14:37:31.280530] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization...
00:18:23.072 [2024-04-17 14:37:31.280924] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65192 ]
00:18:23.072 [2024-04-17 14:37:31.466205] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:23.331 [2024-04-17 14:37:31.788877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:23.590 14:37:32 -- accel/accel.sh@20 -- # val=0x1
00:18:23.590 14:37:32 -- accel/accel.sh@20 -- # val=copy_crc32c
00:18:23.590 14:37:32 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:18:23.590 14:37:32 -- accel/accel.sh@20 -- # val=0
00:18:23.590 14:37:32 -- accel/accel.sh@20 -- # val='4096 bytes'
00:18:23.590 14:37:32 -- accel/accel.sh@20 -- # val='8192 bytes'
00:18:23.590 14:37:32 -- accel/accel.sh@20 -- # val=software
00:18:23.590 14:37:32 -- accel/accel.sh@22 -- # accel_module=software
00:18:23.590 14:37:32 -- accel/accel.sh@20 -- # val=32
00:18:23.590 14:37:32 -- accel/accel.sh@20 -- # val=32
00:18:23.590 14:37:32 -- accel/accel.sh@20 -- # val=1
00:18:23.590 14:37:32 -- accel/accel.sh@20 -- # val='1 seconds'
00:18:23.590 14:37:32 -- accel/accel.sh@20 -- # val=Yes
00:18:26.122 14:37:34 -- accel/accel.sh@27 -- # [[ -n software ]]
00:18:26.122 14:37:34 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:18:26.122 14:37:34 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:18:26.122 real    0m2.961s
00:18:26.122 user    0m2.634s
00:18:26.122 sys     0m0.227s
00:18:26.122 ************************************
00:18:26.122 END TEST accel_copy_crc32c_C2
00:18:26.122 ************************************
00:18:26.122 14:37:34 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:18:26.122 ************************************
00:18:26.122 START TEST accel_dualcast
00:18:26.122 ************************************
00:18:26.122 14:37:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:18:26.122 14:37:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:18:26.122 [2024-04-17 14:37:34.358125] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization...
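
The -C 2 variant that just passed shows '4096 bytes' per buffer but '8192 bytes' in total in its trace, consistent with one logical payload split into two chunks whose CRCs are chained: each chunk's result seeds the next chunk. While dualcast starts up below, a sketch of that chaining, reusing copy_crc32c() from the previous example; the exact chunk split is an assumption read off the trace, not confirmed SPDK internals:

    /* CRC-32C is chainable: feeding chunk 1's CRC in as chunk 2's seed
     * yields the CRC of the whole 8 KiB region. */
    static uint32_t copy_crc32c_chained(uint8_t *dst, const uint8_t *src)
    {
        uint32_t crc = 0;                                  /* seed 0, as above */
        crc = copy_crc32c(dst, src, 4096, crc);            /* first 4 KiB      */
        crc = copy_crc32c(dst + 4096, src + 4096, 4096, crc); /* second 4 KiB  */
        return crc;                                        /* CRC of all 8 KiB */
    }
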
00:18:26.122 [2024-04-17 14:37:34.358484] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65248 ]
00:18:26.122 [2024-04-17 14:37:34.522831] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:26.380 [2024-04-17 14:37:34.787782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:26.639 14:37:35 -- accel/accel.sh@20 -- # val=0x1
00:18:26.639 14:37:35 -- accel/accel.sh@20 -- # val=dualcast
00:18:26.639 14:37:35 -- accel/accel.sh@23 -- # accel_opc=dualcast
00:18:26.639 14:37:35 -- accel/accel.sh@20 -- # val='4096 bytes'
00:18:26.639 14:37:35 -- accel/accel.sh@20 -- # val=software
00:18:26.639 14:37:35 -- accel/accel.sh@22 -- # accel_module=software
00:18:26.639 14:37:35 -- accel/accel.sh@20 -- # val=32
00:18:26.639 14:37:35 -- accel/accel.sh@20 -- # val=32
00:18:26.639 14:37:35 -- accel/accel.sh@20 -- # val=1
00:18:26.639 14:37:35 -- accel/accel.sh@20 -- # val='1 seconds'
00:18:26.639 14:37:35 -- accel/accel.sh@20 -- # val=Yes
00:18:29.172 14:37:37 -- accel/accel.sh@27 -- # [[ -n software ]]
00:18:29.172 14:37:37 -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:18:29.172 14:37:37 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:18:29.172 real    0m2.868s
00:18:29.172 user    0m2.571s
00:18:29.172 sys     0m0.193s
00:18:29.172 ************************************
00:18:29.172 END TEST accel_dualcast
00:18:29.172 ************************************
00:18:29.172 14:37:37 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:18:29.172 ************************************
00:18:29.172 START TEST accel_compare
00:18:29.172 ************************************
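
accel_dualcast passed; the opcode writes one source to two destinations in a single operation, which is useful for mirroring a payload without issuing two separate copies. A plain-C sketch before the compare run below (sw_dualcast is a made-up name, not SPDK code):

    #include <string.h>
    #include <stddef.h>

    /* One source, two destinations; the software path is two memcpys. */
    static void sw_dualcast(void *dst1, void *dst2, const void *src, size_t len)
    {
        memcpy(dst1, src, len);
        memcpy(dst2, src, len);
    }
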
00:18:29.172 14:37:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:18:29.172 14:37:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:18:29.173 [2024-04-17 14:37:37.382126] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization...
00:18:29.173 [2024-04-17 14:37:37.382522] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65304 ]
00:18:29.173 [2024-04-17 14:37:37.574594] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:29.431 [2024-04-17 14:37:37.916650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:29.689 14:37:38 -- accel/accel.sh@20 -- # val=0x1
00:18:29.689 14:37:38 -- accel/accel.sh@20 -- # val=compare
00:18:29.689 14:37:38 -- accel/accel.sh@23 -- # accel_opc=compare
00:18:29.689 14:37:38 -- accel/accel.sh@20 -- # val='4096 bytes'
00:18:29.689 14:37:38 -- accel/accel.sh@20 -- # val=software
00:18:29.689 14:37:38 -- accel/accel.sh@22 -- # accel_module=software
00:18:29.689 14:37:38 -- accel/accel.sh@20 -- # val=32
00:18:29.689 14:37:38 -- accel/accel.sh@20 -- # val=32
00:18:29.689 14:37:38 -- accel/accel.sh@20 -- # val=1
00:18:29.689 14:37:38 -- accel/accel.sh@20 -- # val='1 seconds'
00:18:29.689 14:37:38 -- accel/accel.sh@20 -- # val=Yes
00:18:32.220 14:37:40 -- accel/accel.sh@27 -- # [[ -n software ]]
00:18:32.220 14:37:40 -- accel/accel.sh@27 -- # [[ -n compare ]]
00:18:32.220 14:37:40 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:18:32.220 real    0m2.961s
00:18:32.220 user    0m2.653s
00:18:32.220 sys     0m0.208s
00:18:32.220 ************************************
00:18:32.220 END TEST accel_compare
00:18:32.220 ************************************
00:18:32.220 14:37:40 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:18:32.220 ************************************
00:18:32.220 START TEST accel_xor
00:18:32.220 ************************************
00:18:32.221 14:37:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:18:32.221 14:37:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:18:32.221 [2024-04-17 14:37:40.461995] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization...
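
accel_compare passed; the opcode checks that two equal-sized buffers are identical, and accel_perf fails the run if any pair differs. In the software module this is essentially a guarded memcmp. A sketch while the xor run initializes below (sw_compare is an illustrative name):

    #include <string.h>
    #include <stddef.h>

    /* Returns 0 when the two payloads (4 KiB in the run above) match. */
    static int sw_compare(const void *a, const void *b, size_t len)
    {
        return memcmp(a, b, len);
    }
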
00:18:32.221 [2024-04-17 14:37:40.462336] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65360 ]
00:18:32.221 [2024-04-17 14:37:40.635876] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:32.491 [2024-04-17 14:37:40.954277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:32.794 14:37:41 -- accel/accel.sh@20 -- # val=0x1
00:18:32.794 14:37:41 -- accel/accel.sh@20 -- # val=xor
00:18:32.794 14:37:41 -- accel/accel.sh@23 -- # accel_opc=xor
00:18:32.794 14:37:41 -- accel/accel.sh@20 -- # val=2
00:18:32.794 14:37:41 -- accel/accel.sh@20 -- # val='4096 bytes'
00:18:32.794 14:37:41 -- accel/accel.sh@20 -- # val=software
00:18:32.794 14:37:41 -- accel/accel.sh@22 -- # accel_module=software
00:18:32.794 14:37:41 -- accel/accel.sh@20 -- # val=32
00:18:32.794 14:37:41 -- accel/accel.sh@20 -- # val=32
00:18:32.794 14:37:41 -- accel/accel.sh@20 -- # val=1
00:18:32.794 14:37:41 -- accel/accel.sh@20 -- # val='1 seconds'
00:18:32.794 14:37:41 -- accel/accel.sh@20 -- # val=Yes
00:18:35.347 14:37:43 -- accel/accel.sh@27 -- # [[ -n software ]]
00:18:35.347 14:37:43 -- accel/accel.sh@27 -- # [[ -n xor ]]
00:18:35.347 14:37:43 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:18:35.347 real    0m2.941s
00:18:35.347 user    0m2.639s
00:18:35.347 sys     0m0.203s
00:18:35.347 ************************************
00:18:35.347 END TEST accel_xor
00:18:35.347 ************************************
00:18:35.347 14:37:43 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:18:35.347 ************************************
00:18:35.347 START TEST accel_xor
00:18:35.347 ************************************
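
The first accel_xor run passed with two source buffers (val=2 above): the destination is the byte-wise XOR of the sources, the parity primitive that RAID-5-style code paths are built on. A minimal sketch before the three-source variant below (sw_xor2 is an illustrative name):

    #include <stdint.h>
    #include <stddef.h>

    /* dst[i] = a[i] ^ b[i] over the 4 KiB buffers used above. */
    static void sw_xor2(uint8_t *dst, const uint8_t *a, const uint8_t *b,
                        size_t len)
    {
        for (size_t i = 0; i < len; i++)
            dst[i] = a[i] ^ b[i];
    }
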
00:18:35.347 14:37:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
00:18:35.347 14:37:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:18:35.347 [2024-04-17 14:37:43.545537] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization...
00:18:35.347 [2024-04-17 14:37:43.545966] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65417 ]
00:18:35.347 [2024-04-17 14:37:43.731955] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:35.605 [2024-04-17 14:37:43.993021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:35.863 14:37:44 -- accel/accel.sh@20 -- # val=0x1
00:18:35.863 14:37:44 -- accel/accel.sh@20 -- # val=xor
00:18:35.863 14:37:44 -- accel/accel.sh@23 -- # accel_opc=xor
00:18:35.863 14:37:44 -- accel/accel.sh@20 -- # val=3
00:18:35.863 14:37:44 -- accel/accel.sh@20 -- # val='4096 bytes'
00:18:35.863 14:37:44 -- accel/accel.sh@20 -- # val=software
00:18:35.863 14:37:44 -- accel/accel.sh@22 -- # accel_module=software
00:18:35.863 14:37:44 -- accel/accel.sh@20 -- # val=32
00:18:35.863 14:37:44 -- accel/accel.sh@20 -- # val=32
00:18:35.863 14:37:44 -- accel/accel.sh@20 -- # val=1
00:18:35.863 14:37:44 -- accel/accel.sh@20 -- # val='1 seconds'
00:18:35.863 14:37:44 -- accel/accel.sh@20 -- # val=Yes
00:18:38.406 14:37:46 -- accel/accel.sh@27 -- # [[ -n software ]]
00:18:38.406 14:37:46 -- accel/accel.sh@27 -- # [[ -n xor ]]
00:18:38.406 14:37:46 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:18:38.406 real    0m2.912s
00:18:38.406 user    0m2.617s
00:18:38.406 sys     0m0.193s
00:18:38.406 ************************************
00:18:38.406 END TEST accel_xor
00:18:38.406 ************************************
00:18:38.406 14:37:46 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:18:38.406 ************************************
00:18:38.406 START TEST accel_dif_verify
00:18:38.406 ************************************
00:18:38.406 14:37:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:18:38.406 14:37:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:18:38.406 [2024-04-17 14:37:46.583141] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization...
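
The -x 3 run passed as well; it is the same XOR opcode generalized to three source buffers (val=3 above). Before the dif_verify output below, the two-source sketch extends naturally to N sources (sw_xor_n is an illustrative name):

    #include <stdint.h>
    #include <stddef.h>

    /* XOR nsrc source buffers into dst; nsrc == 3 for the -x 3 run. */
    static void sw_xor_n(uint8_t *dst, uint8_t *const *srcs, int nsrc,
                         size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            uint8_t acc = srcs[0][i];
            for (int s = 1; s < nsrc; s++)
                acc ^= srcs[s][i];
            dst[i] = acc;
        }
    }
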
00:18:38.406 [2024-04-17 14:37:46.583310] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65473 ]
00:18:38.406 [2024-04-17 14:37:46.768178] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:38.664 [2024-04-17 14:37:47.019606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:38.924 14:37:47 -- accel/accel.sh@20 -- # val=0x1
00:18:38.924 14:37:47 -- accel/accel.sh@20 -- # val=dif_verify
00:18:38.924 14:37:47 -- accel/accel.sh@23 -- # accel_opc=dif_verify
00:18:38.924 14:37:47 -- accel/accel.sh@20 -- # val='4096 bytes'
00:18:38.924 14:37:47 -- accel/accel.sh@20 -- # val='4096 bytes'
00:18:38.924 14:37:47 -- accel/accel.sh@20 -- # val='512 bytes'
00:18:38.924 14:37:47 -- accel/accel.sh@20 -- # val='8 bytes'
00:18:38.924 14:37:47 -- accel/accel.sh@20 -- # val=software
00:18:38.924 14:37:47 -- accel/accel.sh@22 -- # accel_module=software
00:18:38.924 14:37:47 -- accel/accel.sh@20 -- # val=32
00:18:38.924 14:37:47 -- accel/accel.sh@20 -- # val=32
00:18:38.924 14:37:47 -- accel/accel.sh@20 -- # val=1
00:18:38.924 14:37:47 -- accel/accel.sh@20 -- # val='1 seconds'
00:18:38.924 14:37:47 -- accel/accel.sh@20 -- # val=No
00:18:40.828 14:37:49 -- accel/accel.sh@27 -- # [[ -n software ]]
00:18:40.828 14:37:49 -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:18:40.828 14:37:49 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:18:40.828 real    0m2.839s
00:18:40.828 user    0m2.526s
00:18:40.828 sys     0m0.210s
00:18:40.828 ************************************
00:18:40.828 END TEST accel_dif_verify
00:18:40.828 ************************************
00:18:40.828 14:37:49 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:18:41.087 ************************************
00:18:41.087 START TEST accel_dif_generate
00:18:41.087 ************************************
00:18:41.087 14:37:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
00:18:41.087 14:37:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:18:41.087 [2024-04-17 14:37:49.561809] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization...
00:18:41.087 [2024-04-17 14:37:49.561975] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65525 ]
00:18:41.087 [2024-04-17 14:37:49.750029] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:41.604 [2024-04-17 14:37:50.056016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:41.863 14:37:50 -- accel/accel.sh@20 -- # val=0x1
00:18:41.863 14:37:50 -- accel/accel.sh@20 -- # val=dif_generate
14:37:50 --
accel/accel.sh@23 -- # accel_opc=dif_generate 00:18:41.863 14:37:50 -- accel/accel.sh@19 -- # IFS=: 00:18:41.863 14:37:50 -- accel/accel.sh@19 -- # read -r var val 00:18:41.863 14:37:50 -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:41.863 14:37:50 -- accel/accel.sh@21 -- # case "$var" in 00:18:41.863 14:37:50 -- accel/accel.sh@19 -- # IFS=: 00:18:41.863 14:37:50 -- accel/accel.sh@19 -- # read -r var val 00:18:41.863 14:37:50 -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:41.863 14:37:50 -- accel/accel.sh@21 -- # case "$var" in 00:18:41.863 14:37:50 -- accel/accel.sh@19 -- # IFS=: 00:18:41.863 14:37:50 -- accel/accel.sh@19 -- # read -r var val 00:18:41.863 14:37:50 -- accel/accel.sh@20 -- # val='512 bytes' 00:18:41.863 14:37:50 -- accel/accel.sh@21 -- # case "$var" in 00:18:41.863 14:37:50 -- accel/accel.sh@19 -- # IFS=: 00:18:41.863 14:37:50 -- accel/accel.sh@19 -- # read -r var val 00:18:41.863 14:37:50 -- accel/accel.sh@20 -- # val='8 bytes' 00:18:41.863 14:37:50 -- accel/accel.sh@21 -- # case "$var" in 00:18:41.863 14:37:50 -- accel/accel.sh@19 -- # IFS=: 00:18:41.863 14:37:50 -- accel/accel.sh@19 -- # read -r var val 00:18:41.863 14:37:50 -- accel/accel.sh@20 -- # val= 00:18:41.863 14:37:50 -- accel/accel.sh@21 -- # case "$var" in 00:18:41.863 14:37:50 -- accel/accel.sh@19 -- # IFS=: 00:18:41.863 14:37:50 -- accel/accel.sh@19 -- # read -r var val 00:18:41.863 14:37:50 -- accel/accel.sh@20 -- # val=software 00:18:41.863 14:37:50 -- accel/accel.sh@21 -- # case "$var" in 00:18:41.863 14:37:50 -- accel/accel.sh@22 -- # accel_module=software 00:18:41.863 14:37:50 -- accel/accel.sh@19 -- # IFS=: 00:18:41.863 14:37:50 -- accel/accel.sh@19 -- # read -r var val 00:18:41.863 14:37:50 -- accel/accel.sh@20 -- # val=32 00:18:41.863 14:37:50 -- accel/accel.sh@21 -- # case "$var" in 00:18:41.863 14:37:50 -- accel/accel.sh@19 -- # IFS=: 00:18:41.863 14:37:50 -- accel/accel.sh@19 -- # read -r var val 00:18:41.863 14:37:50 -- accel/accel.sh@20 -- # val=32 00:18:41.863 14:37:50 -- accel/accel.sh@21 -- # case "$var" in 00:18:41.863 14:37:50 -- accel/accel.sh@19 -- # IFS=: 00:18:41.863 14:37:50 -- accel/accel.sh@19 -- # read -r var val 00:18:41.863 14:37:50 -- accel/accel.sh@20 -- # val=1 00:18:41.863 14:37:50 -- accel/accel.sh@21 -- # case "$var" in 00:18:41.863 14:37:50 -- accel/accel.sh@19 -- # IFS=: 00:18:41.863 14:37:50 -- accel/accel.sh@19 -- # read -r var val 00:18:41.863 14:37:50 -- accel/accel.sh@20 -- # val='1 seconds' 00:18:41.863 14:37:50 -- accel/accel.sh@21 -- # case "$var" in 00:18:41.863 14:37:50 -- accel/accel.sh@19 -- # IFS=: 00:18:41.863 14:37:50 -- accel/accel.sh@19 -- # read -r var val 00:18:41.863 14:37:50 -- accel/accel.sh@20 -- # val=No 00:18:41.863 14:37:50 -- accel/accel.sh@21 -- # case "$var" in 00:18:41.863 14:37:50 -- accel/accel.sh@19 -- # IFS=: 00:18:41.863 14:37:50 -- accel/accel.sh@19 -- # read -r var val 00:18:41.863 14:37:50 -- accel/accel.sh@20 -- # val= 00:18:41.863 14:37:50 -- accel/accel.sh@21 -- # case "$var" in 00:18:41.863 14:37:50 -- accel/accel.sh@19 -- # IFS=: 00:18:41.863 14:37:50 -- accel/accel.sh@19 -- # read -r var val 00:18:41.863 14:37:50 -- accel/accel.sh@20 -- # val= 00:18:41.863 14:37:50 -- accel/accel.sh@21 -- # case "$var" in 00:18:41.863 14:37:50 -- accel/accel.sh@19 -- # IFS=: 00:18:41.863 14:37:50 -- accel/accel.sh@19 -- # read -r var val 00:18:43.810 14:37:52 -- accel/accel.sh@20 -- # val= 00:18:43.810 14:37:52 -- accel/accel.sh@21 -- # case "$var" in 00:18:43.810 14:37:52 -- accel/accel.sh@19 -- # IFS=: 00:18:43.810 14:37:52 -- 
accel/accel.sh@19 -- # read -r var val 00:18:43.810 14:37:52 -- accel/accel.sh@20 -- # val= 00:18:43.810 14:37:52 -- accel/accel.sh@21 -- # case "$var" in 00:18:43.810 14:37:52 -- accel/accel.sh@19 -- # IFS=: 00:18:43.810 14:37:52 -- accel/accel.sh@19 -- # read -r var val 00:18:43.810 14:37:52 -- accel/accel.sh@20 -- # val= 00:18:43.810 14:37:52 -- accel/accel.sh@21 -- # case "$var" in 00:18:43.810 14:37:52 -- accel/accel.sh@19 -- # IFS=: 00:18:43.810 14:37:52 -- accel/accel.sh@19 -- # read -r var val 00:18:43.810 14:37:52 -- accel/accel.sh@20 -- # val= 00:18:43.810 14:37:52 -- accel/accel.sh@21 -- # case "$var" in 00:18:43.811 14:37:52 -- accel/accel.sh@19 -- # IFS=: 00:18:43.811 14:37:52 -- accel/accel.sh@19 -- # read -r var val 00:18:43.811 14:37:52 -- accel/accel.sh@20 -- # val= 00:18:43.811 14:37:52 -- accel/accel.sh@21 -- # case "$var" in 00:18:43.811 14:37:52 -- accel/accel.sh@19 -- # IFS=: 00:18:43.811 14:37:52 -- accel/accel.sh@19 -- # read -r var val 00:18:43.811 14:37:52 -- accel/accel.sh@20 -- # val= 00:18:43.811 14:37:52 -- accel/accel.sh@21 -- # case "$var" in 00:18:43.811 14:37:52 -- accel/accel.sh@19 -- # IFS=: 00:18:43.811 14:37:52 -- accel/accel.sh@19 -- # read -r var val 00:18:43.811 14:37:52 -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:43.811 ************************************ 00:18:43.811 END TEST accel_dif_generate 00:18:43.811 ************************************ 00:18:43.811 14:37:52 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:18:43.811 14:37:52 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:43.811 00:18:43.811 real 0m2.889s 00:18:43.811 user 0m2.582s 00:18:43.811 sys 0m0.208s 00:18:43.811 14:37:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:43.811 14:37:52 -- common/autotest_common.sh@10 -- # set +x 00:18:44.070 14:37:52 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:18:44.070 14:37:52 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:18:44.070 14:37:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:44.070 14:37:52 -- common/autotest_common.sh@10 -- # set +x 00:18:44.070 ************************************ 00:18:44.070 START TEST accel_dif_generate_copy 00:18:44.070 ************************************ 00:18:44.070 14:37:52 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:18:44.070 14:37:52 -- accel/accel.sh@16 -- # local accel_opc 00:18:44.070 14:37:52 -- accel/accel.sh@17 -- # local accel_module 00:18:44.070 14:37:52 -- accel/accel.sh@19 -- # IFS=: 00:18:44.070 14:37:52 -- accel/accel.sh@19 -- # read -r var val 00:18:44.070 14:37:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:18:44.070 14:37:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:18:44.070 14:37:52 -- accel/accel.sh@12 -- # build_accel_config 00:18:44.071 14:37:52 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:44.071 14:37:52 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:44.071 14:37:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:44.071 14:37:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:44.071 14:37:52 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:44.071 14:37:52 -- accel/accel.sh@40 -- # local IFS=, 00:18:44.071 14:37:52 -- accel/accel.sh@41 -- # jq -r . 00:18:44.071 [2024-04-17 14:37:52.565079] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
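The repeated "IFS=:", "read -r var val" and "case \"$var\" in" lines in the traces above are bash xtrace output from accel.sh parsing the key:value summary that accel_perf prints. A minimal, runnable sketch of that parsing pattern — the field names and sample values here are illustrative, not the tool's exact output, and this is not the SPDK code itself:

  #!/usr/bin/env bash
  # Hypothetical accel_perf-style summary; the real field names may differ.
  summary=$(printf '%s\n' 'Workload Type: dif_generate' 'Queue depth: 32' 'Module: software')

  while IFS=: read -r var val; do        # same IFS=: / read -r pattern as the xtrace
      case "$var" in                     # same case "$var" dispatch as the xtrace
          *'Workload Type'*) accel_opc=${val# }    ;;
          *'Module'*)        accel_module=${val# } ;;
      esac
  done <<< "$summary"

  echo "opc=$accel_opc module=$accel_module"   # -> opc=dif_generate module=software

Each traced "val=..." assignment above corresponds to one field captured this way (core mask, workload, transfer sizes, queue depth, run time, verify flag).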
00:18:44.071 [2024-04-17 14:37:52.565244] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65581 ] 00:18:44.330 [2024-04-17 14:37:52.744858] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.653 [2024-04-17 14:37:52.994419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.918 14:37:53 -- accel/accel.sh@20 -- # val= 00:18:44.918 14:37:53 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.918 14:37:53 -- accel/accel.sh@19 -- # IFS=: 00:18:44.918 14:37:53 -- accel/accel.sh@19 -- # read -r var val 00:18:44.918 14:37:53 -- accel/accel.sh@20 -- # val= 00:18:44.918 14:37:53 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.918 14:37:53 -- accel/accel.sh@19 -- # IFS=: 00:18:44.918 14:37:53 -- accel/accel.sh@19 -- # read -r var val 00:18:44.918 14:37:53 -- accel/accel.sh@20 -- # val=0x1 00:18:44.918 14:37:53 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.918 14:37:53 -- accel/accel.sh@19 -- # IFS=: 00:18:44.918 14:37:53 -- accel/accel.sh@19 -- # read -r var val 00:18:44.918 14:37:53 -- accel/accel.sh@20 -- # val= 00:18:44.918 14:37:53 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.918 14:37:53 -- accel/accel.sh@19 -- # IFS=: 00:18:44.918 14:37:53 -- accel/accel.sh@19 -- # read -r var val 00:18:44.918 14:37:53 -- accel/accel.sh@20 -- # val= 00:18:44.918 14:37:53 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.918 14:37:53 -- accel/accel.sh@19 -- # IFS=: 00:18:44.918 14:37:53 -- accel/accel.sh@19 -- # read -r var val 00:18:44.918 14:37:53 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:18:44.918 14:37:53 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.918 14:37:53 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:18:44.918 14:37:53 -- accel/accel.sh@19 -- # IFS=: 00:18:44.918 14:37:53 -- accel/accel.sh@19 -- # read -r var val 00:18:44.918 14:37:53 -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:44.918 14:37:53 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.918 14:37:53 -- accel/accel.sh@19 -- # IFS=: 00:18:44.918 14:37:53 -- accel/accel.sh@19 -- # read -r var val 00:18:44.918 14:37:53 -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:44.918 14:37:53 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.918 14:37:53 -- accel/accel.sh@19 -- # IFS=: 00:18:44.918 14:37:53 -- accel/accel.sh@19 -- # read -r var val 00:18:44.918 14:37:53 -- accel/accel.sh@20 -- # val= 00:18:44.918 14:37:53 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.918 14:37:53 -- accel/accel.sh@19 -- # IFS=: 00:18:44.918 14:37:53 -- accel/accel.sh@19 -- # read -r var val 00:18:44.918 14:37:53 -- accel/accel.sh@20 -- # val=software 00:18:44.918 14:37:53 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.918 14:37:53 -- accel/accel.sh@22 -- # accel_module=software 00:18:44.918 14:37:53 -- accel/accel.sh@19 -- # IFS=: 00:18:44.918 14:37:53 -- accel/accel.sh@19 -- # read -r var val 00:18:44.918 14:37:53 -- accel/accel.sh@20 -- # val=32 00:18:44.918 14:37:53 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.918 14:37:53 -- accel/accel.sh@19 -- # IFS=: 00:18:44.918 14:37:53 -- accel/accel.sh@19 -- # read -r var val 00:18:44.918 14:37:53 -- accel/accel.sh@20 -- # val=32 00:18:44.918 14:37:53 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.918 14:37:53 -- accel/accel.sh@19 -- # IFS=: 00:18:44.918 14:37:53 -- accel/accel.sh@19 -- # read -r var val 00:18:44.918 14:37:53 -- accel/accel.sh@20 
-- # val=1 00:18:44.918 14:37:53 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.918 14:37:53 -- accel/accel.sh@19 -- # IFS=: 00:18:44.918 14:37:53 -- accel/accel.sh@19 -- # read -r var val 00:18:44.918 14:37:53 -- accel/accel.sh@20 -- # val='1 seconds' 00:18:44.918 14:37:53 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.918 14:37:53 -- accel/accel.sh@19 -- # IFS=: 00:18:44.918 14:37:53 -- accel/accel.sh@19 -- # read -r var val 00:18:44.918 14:37:53 -- accel/accel.sh@20 -- # val=No 00:18:44.918 14:37:53 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.918 14:37:53 -- accel/accel.sh@19 -- # IFS=: 00:18:44.918 14:37:53 -- accel/accel.sh@19 -- # read -r var val 00:18:44.918 14:37:53 -- accel/accel.sh@20 -- # val= 00:18:44.918 14:37:53 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.918 14:37:53 -- accel/accel.sh@19 -- # IFS=: 00:18:44.918 14:37:53 -- accel/accel.sh@19 -- # read -r var val 00:18:44.918 14:37:53 -- accel/accel.sh@20 -- # val= 00:18:44.918 14:37:53 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.918 14:37:53 -- accel/accel.sh@19 -- # IFS=: 00:18:44.918 14:37:53 -- accel/accel.sh@19 -- # read -r var val 00:18:46.821 14:37:55 -- accel/accel.sh@20 -- # val= 00:18:46.821 14:37:55 -- accel/accel.sh@21 -- # case "$var" in 00:18:46.821 14:37:55 -- accel/accel.sh@19 -- # IFS=: 00:18:46.821 14:37:55 -- accel/accel.sh@19 -- # read -r var val 00:18:46.821 14:37:55 -- accel/accel.sh@20 -- # val= 00:18:46.821 14:37:55 -- accel/accel.sh@21 -- # case "$var" in 00:18:46.821 14:37:55 -- accel/accel.sh@19 -- # IFS=: 00:18:46.821 14:37:55 -- accel/accel.sh@19 -- # read -r var val 00:18:46.821 14:37:55 -- accel/accel.sh@20 -- # val= 00:18:46.821 14:37:55 -- accel/accel.sh@21 -- # case "$var" in 00:18:46.821 14:37:55 -- accel/accel.sh@19 -- # IFS=: 00:18:46.821 14:37:55 -- accel/accel.sh@19 -- # read -r var val 00:18:46.821 14:37:55 -- accel/accel.sh@20 -- # val= 00:18:46.821 14:37:55 -- accel/accel.sh@21 -- # case "$var" in 00:18:46.821 14:37:55 -- accel/accel.sh@19 -- # IFS=: 00:18:46.821 14:37:55 -- accel/accel.sh@19 -- # read -r var val 00:18:46.821 14:37:55 -- accel/accel.sh@20 -- # val= 00:18:46.821 14:37:55 -- accel/accel.sh@21 -- # case "$var" in 00:18:46.821 14:37:55 -- accel/accel.sh@19 -- # IFS=: 00:18:46.821 14:37:55 -- accel/accel.sh@19 -- # read -r var val 00:18:46.821 14:37:55 -- accel/accel.sh@20 -- # val= 00:18:46.821 14:37:55 -- accel/accel.sh@21 -- # case "$var" in 00:18:46.821 14:37:55 -- accel/accel.sh@19 -- # IFS=: 00:18:46.821 14:37:55 -- accel/accel.sh@19 -- # read -r var val 00:18:46.821 14:37:55 -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:46.821 14:37:55 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:18:46.821 ************************************ 00:18:46.821 END TEST accel_dif_generate_copy 00:18:46.821 ************************************ 00:18:46.821 14:37:55 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:46.821 00:18:46.821 real 0m2.857s 00:18:46.821 user 0m2.552s 00:18:46.821 sys 0m0.196s 00:18:46.821 14:37:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:46.821 14:37:55 -- common/autotest_common.sh@10 -- # set +x 00:18:46.821 14:37:55 -- accel/accel.sh@115 -- # [[ y == y ]] 00:18:46.821 14:37:55 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:46.821 14:37:55 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:18:46.821 14:37:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:46.821 14:37:55 -- 
common/autotest_common.sh@10 -- # set +x 00:18:47.079 ************************************ 00:18:47.079 START TEST accel_comp 00:18:47.079 ************************************ 00:18:47.079 14:37:55 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:47.079 14:37:55 -- accel/accel.sh@16 -- # local accel_opc 00:18:47.079 14:37:55 -- accel/accel.sh@17 -- # local accel_module 00:18:47.079 14:37:55 -- accel/accel.sh@19 -- # IFS=: 00:18:47.079 14:37:55 -- accel/accel.sh@19 -- # read -r var val 00:18:47.079 14:37:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:47.079 14:37:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:47.079 14:37:55 -- accel/accel.sh@12 -- # build_accel_config 00:18:47.079 14:37:55 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:47.079 14:37:55 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:47.079 14:37:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:47.079 14:37:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:47.079 14:37:55 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:47.079 14:37:55 -- accel/accel.sh@40 -- # local IFS=, 00:18:47.079 14:37:55 -- accel/accel.sh@41 -- # jq -r . 00:18:47.079 [2024-04-17 14:37:55.525669] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:18:47.079 [2024-04-17 14:37:55.525782] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65637 ] 00:18:47.338 [2024-04-17 14:37:55.691301] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.597 [2024-04-17 14:37:55.957420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.856 14:37:56 -- accel/accel.sh@20 -- # val= 00:18:47.856 14:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:18:47.856 14:37:56 -- accel/accel.sh@19 -- # IFS=: 00:18:47.856 14:37:56 -- accel/accel.sh@19 -- # read -r var val 00:18:47.856 14:37:56 -- accel/accel.sh@20 -- # val= 00:18:47.856 14:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:18:47.856 14:37:56 -- accel/accel.sh@19 -- # IFS=: 00:18:47.856 14:37:56 -- accel/accel.sh@19 -- # read -r var val 00:18:47.856 14:37:56 -- accel/accel.sh@20 -- # val= 00:18:47.856 14:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:18:47.856 14:37:56 -- accel/accel.sh@19 -- # IFS=: 00:18:47.856 14:37:56 -- accel/accel.sh@19 -- # read -r var val 00:18:47.856 14:37:56 -- accel/accel.sh@20 -- # val=0x1 00:18:47.856 14:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:18:47.856 14:37:56 -- accel/accel.sh@19 -- # IFS=: 00:18:47.856 14:37:56 -- accel/accel.sh@19 -- # read -r var val 00:18:47.856 14:37:56 -- accel/accel.sh@20 -- # val= 00:18:47.856 14:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:18:47.856 14:37:56 -- accel/accel.sh@19 -- # IFS=: 00:18:47.856 14:37:56 -- accel/accel.sh@19 -- # read -r var val 00:18:47.856 14:37:56 -- accel/accel.sh@20 -- # val= 00:18:47.856 14:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:18:47.856 14:37:56 -- accel/accel.sh@19 -- # IFS=: 00:18:47.856 14:37:56 -- accel/accel.sh@19 -- # read -r var val 00:18:47.856 14:37:56 -- accel/accel.sh@20 -- # val=compress 00:18:47.856 14:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:18:47.856 14:37:56 -- accel/accel.sh@23 
-- # accel_opc=compress 00:18:47.856 14:37:56 -- accel/accel.sh@19 -- # IFS=: 00:18:47.856 14:37:56 -- accel/accel.sh@19 -- # read -r var val 00:18:47.856 14:37:56 -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:47.856 14:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:18:47.856 14:37:56 -- accel/accel.sh@19 -- # IFS=: 00:18:47.856 14:37:56 -- accel/accel.sh@19 -- # read -r var val 00:18:47.856 14:37:56 -- accel/accel.sh@20 -- # val= 00:18:47.856 14:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:18:47.856 14:37:56 -- accel/accel.sh@19 -- # IFS=: 00:18:47.856 14:37:56 -- accel/accel.sh@19 -- # read -r var val 00:18:47.856 14:37:56 -- accel/accel.sh@20 -- # val=software 00:18:47.856 14:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:18:47.856 14:37:56 -- accel/accel.sh@22 -- # accel_module=software 00:18:47.856 14:37:56 -- accel/accel.sh@19 -- # IFS=: 00:18:47.856 14:37:56 -- accel/accel.sh@19 -- # read -r var val 00:18:47.856 14:37:56 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:47.856 14:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:18:47.856 14:37:56 -- accel/accel.sh@19 -- # IFS=: 00:18:47.856 14:37:56 -- accel/accel.sh@19 -- # read -r var val 00:18:47.856 14:37:56 -- accel/accel.sh@20 -- # val=32 00:18:47.856 14:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:18:47.856 14:37:56 -- accel/accel.sh@19 -- # IFS=: 00:18:47.856 14:37:56 -- accel/accel.sh@19 -- # read -r var val 00:18:47.856 14:37:56 -- accel/accel.sh@20 -- # val=32 00:18:47.856 14:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:18:47.856 14:37:56 -- accel/accel.sh@19 -- # IFS=: 00:18:47.856 14:37:56 -- accel/accel.sh@19 -- # read -r var val 00:18:47.856 14:37:56 -- accel/accel.sh@20 -- # val=1 00:18:47.856 14:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:18:47.856 14:37:56 -- accel/accel.sh@19 -- # IFS=: 00:18:47.856 14:37:56 -- accel/accel.sh@19 -- # read -r var val 00:18:47.856 14:37:56 -- accel/accel.sh@20 -- # val='1 seconds' 00:18:47.856 14:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:18:47.856 14:37:56 -- accel/accel.sh@19 -- # IFS=: 00:18:47.856 14:37:56 -- accel/accel.sh@19 -- # read -r var val 00:18:47.856 14:37:56 -- accel/accel.sh@20 -- # val=No 00:18:47.856 14:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:18:47.856 14:37:56 -- accel/accel.sh@19 -- # IFS=: 00:18:47.856 14:37:56 -- accel/accel.sh@19 -- # read -r var val 00:18:47.856 14:37:56 -- accel/accel.sh@20 -- # val= 00:18:47.856 14:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:18:47.856 14:37:56 -- accel/accel.sh@19 -- # IFS=: 00:18:47.856 14:37:56 -- accel/accel.sh@19 -- # read -r var val 00:18:47.856 14:37:56 -- accel/accel.sh@20 -- # val= 00:18:47.856 14:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:18:47.856 14:37:56 -- accel/accel.sh@19 -- # IFS=: 00:18:47.856 14:37:56 -- accel/accel.sh@19 -- # read -r var val 00:18:49.828 14:37:58 -- accel/accel.sh@20 -- # val= 00:18:49.828 14:37:58 -- accel/accel.sh@21 -- # case "$var" in 00:18:49.828 14:37:58 -- accel/accel.sh@19 -- # IFS=: 00:18:49.828 14:37:58 -- accel/accel.sh@19 -- # read -r var val 00:18:49.828 14:37:58 -- accel/accel.sh@20 -- # val= 00:18:49.828 14:37:58 -- accel/accel.sh@21 -- # case "$var" in 00:18:49.828 14:37:58 -- accel/accel.sh@19 -- # IFS=: 00:18:49.828 14:37:58 -- accel/accel.sh@19 -- # read -r var val 00:18:49.828 14:37:58 -- accel/accel.sh@20 -- # val= 00:18:49.828 14:37:58 -- accel/accel.sh@21 -- # case "$var" in 00:18:49.828 14:37:58 -- accel/accel.sh@19 -- # IFS=: 00:18:49.828 14:37:58 -- accel/accel.sh@19 -- # 
read -r var val 00:18:49.828 14:37:58 -- accel/accel.sh@20 -- # val= 00:18:49.828 14:37:58 -- accel/accel.sh@21 -- # case "$var" in 00:18:49.828 14:37:58 -- accel/accel.sh@19 -- # IFS=: 00:18:49.828 14:37:58 -- accel/accel.sh@19 -- # read -r var val 00:18:49.828 14:37:58 -- accel/accel.sh@20 -- # val= 00:18:49.828 14:37:58 -- accel/accel.sh@21 -- # case "$var" in 00:18:49.828 14:37:58 -- accel/accel.sh@19 -- # IFS=: 00:18:49.828 14:37:58 -- accel/accel.sh@19 -- # read -r var val 00:18:49.828 14:37:58 -- accel/accel.sh@20 -- # val= 00:18:49.828 14:37:58 -- accel/accel.sh@21 -- # case "$var" in 00:18:49.828 14:37:58 -- accel/accel.sh@19 -- # IFS=: 00:18:49.828 14:37:58 -- accel/accel.sh@19 -- # read -r var val 00:18:49.828 14:37:58 -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:49.828 14:37:58 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:18:49.828 14:37:58 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:49.828 00:18:49.828 real 0m2.858s 00:18:49.828 user 0m2.559s 00:18:49.828 sys 0m0.200s 00:18:49.828 14:37:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:49.828 ************************************ 00:18:49.828 END TEST accel_comp 00:18:49.828 ************************************ 00:18:49.828 14:37:58 -- common/autotest_common.sh@10 -- # set +x 00:18:49.828 14:37:58 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:18:49.828 14:37:58 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:18:49.828 14:37:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:49.828 14:37:58 -- common/autotest_common.sh@10 -- # set +x 00:18:50.087 ************************************ 00:18:50.087 START TEST accel_decomp 00:18:50.087 ************************************ 00:18:50.087 14:37:58 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:18:50.087 14:37:58 -- accel/accel.sh@16 -- # local accel_opc 00:18:50.087 14:37:58 -- accel/accel.sh@17 -- # local accel_module 00:18:50.087 14:37:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:18:50.087 14:37:58 -- accel/accel.sh@19 -- # IFS=: 00:18:50.087 14:37:58 -- accel/accel.sh@19 -- # read -r var val 00:18:50.087 14:37:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:18:50.087 14:37:58 -- accel/accel.sh@12 -- # build_accel_config 00:18:50.087 14:37:58 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:50.087 14:37:58 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:50.087 14:37:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:50.087 14:37:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:50.087 14:37:58 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:50.087 14:37:58 -- accel/accel.sh@40 -- # local IFS=, 00:18:50.087 14:37:58 -- accel/accel.sh@41 -- # jq -r . 00:18:50.087 [2024-04-17 14:37:58.518286] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
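Distilled from the run_test lines above, the compress/decompress pair boils down to two accel_perf invocations against the checked-in test/accel/bib sample; the -c /dev/fd/62 JSON-config plumbing is dropped here for brevity, and judging by the traces, -y is what flips the final verify value from No (compress) to Yes (decompress):

  #!/usr/bin/env bash
  # Re-run of the two commands the harness executed (paths as in the log).
  perf=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  bib=/home/vagrant/spdk_repo/spdk/test/accel/bib

  "$perf" -t 1 -w compress -l "$bib"        # compress the sample file for 1 second
  "$perf" -t 1 -w decompress -l "$bib" -y   # decompress and, per the trace, verify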
00:18:50.087 [2024-04-17 14:37:58.518688] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65695 ] 00:18:50.346 [2024-04-17 14:37:58.694382] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.605 [2024-04-17 14:37:58.953572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.865 14:37:59 -- accel/accel.sh@20 -- # val= 00:18:50.865 14:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:18:50.865 14:37:59 -- accel/accel.sh@19 -- # IFS=: 00:18:50.865 14:37:59 -- accel/accel.sh@19 -- # read -r var val 00:18:50.865 14:37:59 -- accel/accel.sh@20 -- # val= 00:18:50.865 14:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:18:50.865 14:37:59 -- accel/accel.sh@19 -- # IFS=: 00:18:50.865 14:37:59 -- accel/accel.sh@19 -- # read -r var val 00:18:50.865 14:37:59 -- accel/accel.sh@20 -- # val= 00:18:50.865 14:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:18:50.865 14:37:59 -- accel/accel.sh@19 -- # IFS=: 00:18:50.865 14:37:59 -- accel/accel.sh@19 -- # read -r var val 00:18:50.865 14:37:59 -- accel/accel.sh@20 -- # val=0x1 00:18:50.865 14:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:18:50.865 14:37:59 -- accel/accel.sh@19 -- # IFS=: 00:18:50.865 14:37:59 -- accel/accel.sh@19 -- # read -r var val 00:18:50.865 14:37:59 -- accel/accel.sh@20 -- # val= 00:18:50.865 14:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:18:50.865 14:37:59 -- accel/accel.sh@19 -- # IFS=: 00:18:50.865 14:37:59 -- accel/accel.sh@19 -- # read -r var val 00:18:50.865 14:37:59 -- accel/accel.sh@20 -- # val= 00:18:50.865 14:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:18:50.865 14:37:59 -- accel/accel.sh@19 -- # IFS=: 00:18:50.865 14:37:59 -- accel/accel.sh@19 -- # read -r var val 00:18:50.865 14:37:59 -- accel/accel.sh@20 -- # val=decompress 00:18:50.865 14:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:18:50.865 14:37:59 -- accel/accel.sh@23 -- # accel_opc=decompress 00:18:50.865 14:37:59 -- accel/accel.sh@19 -- # IFS=: 00:18:50.865 14:37:59 -- accel/accel.sh@19 -- # read -r var val 00:18:50.865 14:37:59 -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:50.865 14:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:18:50.865 14:37:59 -- accel/accel.sh@19 -- # IFS=: 00:18:50.865 14:37:59 -- accel/accel.sh@19 -- # read -r var val 00:18:50.865 14:37:59 -- accel/accel.sh@20 -- # val= 00:18:50.865 14:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:18:50.865 14:37:59 -- accel/accel.sh@19 -- # IFS=: 00:18:50.865 14:37:59 -- accel/accel.sh@19 -- # read -r var val 00:18:50.865 14:37:59 -- accel/accel.sh@20 -- # val=software 00:18:50.865 14:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:18:50.865 14:37:59 -- accel/accel.sh@22 -- # accel_module=software 00:18:50.865 14:37:59 -- accel/accel.sh@19 -- # IFS=: 00:18:50.865 14:37:59 -- accel/accel.sh@19 -- # read -r var val 00:18:50.865 14:37:59 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:50.865 14:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:18:50.865 14:37:59 -- accel/accel.sh@19 -- # IFS=: 00:18:50.865 14:37:59 -- accel/accel.sh@19 -- # read -r var val 00:18:50.865 14:37:59 -- accel/accel.sh@20 -- # val=32 00:18:50.865 14:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:18:50.865 14:37:59 -- accel/accel.sh@19 -- # IFS=: 00:18:50.865 14:37:59 -- accel/accel.sh@19 -- # read -r var val 00:18:50.865 14:37:59 -- 
accel/accel.sh@20 -- # val=32 00:18:50.865 14:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:18:50.865 14:37:59 -- accel/accel.sh@19 -- # IFS=: 00:18:50.865 14:37:59 -- accel/accel.sh@19 -- # read -r var val 00:18:50.865 14:37:59 -- accel/accel.sh@20 -- # val=1 00:18:50.865 14:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:18:50.865 14:37:59 -- accel/accel.sh@19 -- # IFS=: 00:18:50.865 14:37:59 -- accel/accel.sh@19 -- # read -r var val 00:18:50.865 14:37:59 -- accel/accel.sh@20 -- # val='1 seconds' 00:18:50.865 14:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:18:50.865 14:37:59 -- accel/accel.sh@19 -- # IFS=: 00:18:50.865 14:37:59 -- accel/accel.sh@19 -- # read -r var val 00:18:50.865 14:37:59 -- accel/accel.sh@20 -- # val=Yes 00:18:50.865 14:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:18:50.865 14:37:59 -- accel/accel.sh@19 -- # IFS=: 00:18:50.865 14:37:59 -- accel/accel.sh@19 -- # read -r var val 00:18:50.865 14:37:59 -- accel/accel.sh@20 -- # val= 00:18:50.865 14:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:18:50.865 14:37:59 -- accel/accel.sh@19 -- # IFS=: 00:18:50.865 14:37:59 -- accel/accel.sh@19 -- # read -r var val 00:18:50.866 14:37:59 -- accel/accel.sh@20 -- # val= 00:18:50.866 14:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:18:50.866 14:37:59 -- accel/accel.sh@19 -- # IFS=: 00:18:50.866 14:37:59 -- accel/accel.sh@19 -- # read -r var val 00:18:53.402 14:38:01 -- accel/accel.sh@20 -- # val= 00:18:53.402 14:38:01 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.402 14:38:01 -- accel/accel.sh@19 -- # IFS=: 00:18:53.402 14:38:01 -- accel/accel.sh@19 -- # read -r var val 00:18:53.402 14:38:01 -- accel/accel.sh@20 -- # val= 00:18:53.402 14:38:01 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.402 14:38:01 -- accel/accel.sh@19 -- # IFS=: 00:18:53.402 14:38:01 -- accel/accel.sh@19 -- # read -r var val 00:18:53.402 14:38:01 -- accel/accel.sh@20 -- # val= 00:18:53.402 14:38:01 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.402 14:38:01 -- accel/accel.sh@19 -- # IFS=: 00:18:53.402 14:38:01 -- accel/accel.sh@19 -- # read -r var val 00:18:53.402 14:38:01 -- accel/accel.sh@20 -- # val= 00:18:53.402 14:38:01 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.402 14:38:01 -- accel/accel.sh@19 -- # IFS=: 00:18:53.402 14:38:01 -- accel/accel.sh@19 -- # read -r var val 00:18:53.402 14:38:01 -- accel/accel.sh@20 -- # val= 00:18:53.402 14:38:01 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.402 14:38:01 -- accel/accel.sh@19 -- # IFS=: 00:18:53.402 14:38:01 -- accel/accel.sh@19 -- # read -r var val 00:18:53.402 14:38:01 -- accel/accel.sh@20 -- # val= 00:18:53.402 14:38:01 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.402 14:38:01 -- accel/accel.sh@19 -- # IFS=: 00:18:53.402 14:38:01 -- accel/accel.sh@19 -- # read -r var val 00:18:53.402 14:38:01 -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:53.402 14:38:01 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:18:53.402 14:38:01 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:53.402 00:18:53.402 real 0m2.949s 00:18:53.402 user 0m2.640s 00:18:53.402 sys 0m0.208s 00:18:53.402 14:38:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:53.402 14:38:01 -- common/autotest_common.sh@10 -- # set +x 00:18:53.402 ************************************ 00:18:53.402 END TEST accel_decomp 00:18:53.402 ************************************ 00:18:53.402 14:38:01 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
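run_test itself (from test/common/autotest_common.sh) is what produces the START TEST / END TEST banners and the bash time triplets (real/user/sys) between them. A simplified sketch of such a wrapper — the real helper also manages xtrace state and argument counting, which is skipped here:

  # Simplified run_test-style wrapper (a sketch, not the SPDK implementation):
  run_test_sketch() {
      local name=$1; shift
      echo "START TEST $name"
      time "$@"                    # emits the real/user/sys lines seen in the log
      local rc=$?
      echo "END TEST $name"
      return $rc
  }
  # usage mirroring the case above:
  #   run_test_sketch accel_decmop_full accel_test -t 1 -w decompress -l "$bib" -y -o 0

Note the -o 0 on this case: the next trace reports a '111250 bytes' payload instead of the 4096 bytes used elsewhere, which suggests -o 0 makes accel_perf use the input file's full size — an inference from this log, not documented behavior.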
00:18:53.402 14:38:01 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:18:53.402 14:38:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:53.402 14:38:01 -- common/autotest_common.sh@10 -- # set +x 00:18:53.402 ************************************ 00:18:53.402 START TEST accel_decmop_full 00:18:53.402 ************************************ 00:18:53.402 14:38:01 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:18:53.402 14:38:01 -- accel/accel.sh@16 -- # local accel_opc 00:18:53.402 14:38:01 -- accel/accel.sh@17 -- # local accel_module 00:18:53.402 14:38:01 -- accel/accel.sh@19 -- # IFS=: 00:18:53.402 14:38:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:18:53.402 14:38:01 -- accel/accel.sh@19 -- # read -r var val 00:18:53.402 14:38:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:18:53.402 14:38:01 -- accel/accel.sh@12 -- # build_accel_config 00:18:53.402 14:38:01 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:53.402 14:38:01 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:53.402 14:38:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:53.402 14:38:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:53.402 14:38:01 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:53.402 14:38:01 -- accel/accel.sh@40 -- # local IFS=, 00:18:53.402 14:38:01 -- accel/accel.sh@41 -- # jq -r . 00:18:53.402 [2024-04-17 14:38:01.600223] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:18:53.402 [2024-04-17 14:38:01.600753] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65750 ] 00:18:53.402 [2024-04-17 14:38:01.784466] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.661 [2024-04-17 14:38:02.051147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.920 14:38:02 -- accel/accel.sh@20 -- # val= 00:18:53.920 14:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.920 14:38:02 -- accel/accel.sh@19 -- # IFS=: 00:18:53.920 14:38:02 -- accel/accel.sh@19 -- # read -r var val 00:18:53.920 14:38:02 -- accel/accel.sh@20 -- # val= 00:18:53.920 14:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.920 14:38:02 -- accel/accel.sh@19 -- # IFS=: 00:18:53.920 14:38:02 -- accel/accel.sh@19 -- # read -r var val 00:18:53.920 14:38:02 -- accel/accel.sh@20 -- # val= 00:18:53.920 14:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.920 14:38:02 -- accel/accel.sh@19 -- # IFS=: 00:18:53.920 14:38:02 -- accel/accel.sh@19 -- # read -r var val 00:18:53.920 14:38:02 -- accel/accel.sh@20 -- # val=0x1 00:18:53.920 14:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.920 14:38:02 -- accel/accel.sh@19 -- # IFS=: 00:18:53.920 14:38:02 -- accel/accel.sh@19 -- # read -r var val 00:18:53.920 14:38:02 -- accel/accel.sh@20 -- # val= 00:18:53.920 14:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.920 14:38:02 -- accel/accel.sh@19 -- # IFS=: 00:18:53.920 14:38:02 -- accel/accel.sh@19 -- # read -r var val 00:18:53.920 14:38:02 -- accel/accel.sh@20 -- # val= 00:18:53.920 14:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.920 14:38:02 -- accel/accel.sh@19 -- # IFS=: 00:18:53.920 
14:38:02 -- accel/accel.sh@19 -- # read -r var val 00:18:53.920 14:38:02 -- accel/accel.sh@20 -- # val=decompress 00:18:53.920 14:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.920 14:38:02 -- accel/accel.sh@23 -- # accel_opc=decompress 00:18:53.920 14:38:02 -- accel/accel.sh@19 -- # IFS=: 00:18:53.920 14:38:02 -- accel/accel.sh@19 -- # read -r var val 00:18:53.920 14:38:02 -- accel/accel.sh@20 -- # val='111250 bytes' 00:18:53.920 14:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.920 14:38:02 -- accel/accel.sh@19 -- # IFS=: 00:18:53.920 14:38:02 -- accel/accel.sh@19 -- # read -r var val 00:18:53.920 14:38:02 -- accel/accel.sh@20 -- # val= 00:18:53.920 14:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.920 14:38:02 -- accel/accel.sh@19 -- # IFS=: 00:18:53.920 14:38:02 -- accel/accel.sh@19 -- # read -r var val 00:18:53.920 14:38:02 -- accel/accel.sh@20 -- # val=software 00:18:53.920 14:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.920 14:38:02 -- accel/accel.sh@22 -- # accel_module=software 00:18:53.920 14:38:02 -- accel/accel.sh@19 -- # IFS=: 00:18:53.920 14:38:02 -- accel/accel.sh@19 -- # read -r var val 00:18:53.920 14:38:02 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:53.920 14:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.920 14:38:02 -- accel/accel.sh@19 -- # IFS=: 00:18:53.920 14:38:02 -- accel/accel.sh@19 -- # read -r var val 00:18:53.920 14:38:02 -- accel/accel.sh@20 -- # val=32 00:18:53.920 14:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.920 14:38:02 -- accel/accel.sh@19 -- # IFS=: 00:18:53.920 14:38:02 -- accel/accel.sh@19 -- # read -r var val 00:18:53.920 14:38:02 -- accel/accel.sh@20 -- # val=32 00:18:53.920 14:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.920 14:38:02 -- accel/accel.sh@19 -- # IFS=: 00:18:53.920 14:38:02 -- accel/accel.sh@19 -- # read -r var val 00:18:53.920 14:38:02 -- accel/accel.sh@20 -- # val=1 00:18:53.920 14:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.920 14:38:02 -- accel/accel.sh@19 -- # IFS=: 00:18:53.920 14:38:02 -- accel/accel.sh@19 -- # read -r var val 00:18:53.920 14:38:02 -- accel/accel.sh@20 -- # val='1 seconds' 00:18:53.920 14:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.920 14:38:02 -- accel/accel.sh@19 -- # IFS=: 00:18:53.920 14:38:02 -- accel/accel.sh@19 -- # read -r var val 00:18:53.920 14:38:02 -- accel/accel.sh@20 -- # val=Yes 00:18:53.920 14:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.920 14:38:02 -- accel/accel.sh@19 -- # IFS=: 00:18:53.920 14:38:02 -- accel/accel.sh@19 -- # read -r var val 00:18:53.920 14:38:02 -- accel/accel.sh@20 -- # val= 00:18:53.920 14:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.920 14:38:02 -- accel/accel.sh@19 -- # IFS=: 00:18:53.920 14:38:02 -- accel/accel.sh@19 -- # read -r var val 00:18:53.920 14:38:02 -- accel/accel.sh@20 -- # val= 00:18:53.920 14:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.920 14:38:02 -- accel/accel.sh@19 -- # IFS=: 00:18:53.920 14:38:02 -- accel/accel.sh@19 -- # read -r var val 00:18:56.449 14:38:04 -- accel/accel.sh@20 -- # val= 00:18:56.449 14:38:04 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.449 14:38:04 -- accel/accel.sh@19 -- # IFS=: 00:18:56.449 14:38:04 -- accel/accel.sh@19 -- # read -r var val 00:18:56.449 14:38:04 -- accel/accel.sh@20 -- # val= 00:18:56.449 14:38:04 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.449 14:38:04 -- accel/accel.sh@19 -- # IFS=: 00:18:56.449 14:38:04 -- accel/accel.sh@19 -- # read -r 
var val 00:18:56.449 14:38:04 -- accel/accel.sh@20 -- # val= 00:18:56.449 14:38:04 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.449 14:38:04 -- accel/accel.sh@19 -- # IFS=: 00:18:56.449 14:38:04 -- accel/accel.sh@19 -- # read -r var val 00:18:56.449 14:38:04 -- accel/accel.sh@20 -- # val= 00:18:56.449 14:38:04 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.449 14:38:04 -- accel/accel.sh@19 -- # IFS=: 00:18:56.449 14:38:04 -- accel/accel.sh@19 -- # read -r var val 00:18:56.449 14:38:04 -- accel/accel.sh@20 -- # val= 00:18:56.449 14:38:04 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.449 14:38:04 -- accel/accel.sh@19 -- # IFS=: 00:18:56.449 14:38:04 -- accel/accel.sh@19 -- # read -r var val 00:18:56.449 14:38:04 -- accel/accel.sh@20 -- # val= 00:18:56.449 14:38:04 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.449 14:38:04 -- accel/accel.sh@19 -- # IFS=: 00:18:56.449 14:38:04 -- accel/accel.sh@19 -- # read -r var val 00:18:56.449 14:38:04 -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:56.449 14:38:04 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:18:56.449 14:38:04 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:56.449 00:18:56.449 real 0m2.956s 00:18:56.449 user 0m2.641s 00:18:56.449 sys 0m0.206s 00:18:56.449 14:38:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:56.449 14:38:04 -- common/autotest_common.sh@10 -- # set +x 00:18:56.449 ************************************ 00:18:56.449 END TEST accel_decmop_full 00:18:56.449 ************************************ 00:18:56.449 14:38:04 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:18:56.449 14:38:04 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:18:56.449 14:38:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:56.449 14:38:04 -- common/autotest_common.sh@10 -- # set +x 00:18:56.449 ************************************ 00:18:56.449 START TEST accel_decomp_mcore 00:18:56.449 ************************************ 00:18:56.449 14:38:04 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:18:56.449 14:38:04 -- accel/accel.sh@16 -- # local accel_opc 00:18:56.449 14:38:04 -- accel/accel.sh@17 -- # local accel_module 00:18:56.449 14:38:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:18:56.449 14:38:04 -- accel/accel.sh@19 -- # IFS=: 00:18:56.449 14:38:04 -- accel/accel.sh@19 -- # read -r var val 00:18:56.449 14:38:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:18:56.449 14:38:04 -- accel/accel.sh@12 -- # build_accel_config 00:18:56.449 14:38:04 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:56.449 14:38:04 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:56.449 14:38:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:56.449 14:38:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:56.449 14:38:04 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:56.449 14:38:04 -- accel/accel.sh@40 -- # local IFS=, 00:18:56.449 14:38:04 -- accel/accel.sh@41 -- # jq -r . 00:18:56.449 [2024-04-17 14:38:04.658699] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
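The -m 0xf coremask on accel_decomp_mcore is why the next EAL banner reports "Total cores available: 4" and four "Reactor started on core N" notices: 0xf is binary 1111, i.e. cores 0 through 3. A quick way to decode any such mask:

  #!/usr/bin/env bash
  mask=0xf                              # coremask as passed via accel_perf -m
  for ((core = 0; core < 64; core++)); do
      if (( (mask >> core) & 1 )); then
          echo "reactor would start on core $core"
      fi
  done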
00:18:56.449 [2024-04-17 14:38:04.659051] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65802 ] 00:18:56.449 [2024-04-17 14:38:04.827854] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:56.712 [2024-04-17 14:38:05.091549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.712 [2024-04-17 14:38:05.091626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:56.712 [2024-04-17 14:38:05.091641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.713 [2024-04-17 14:38:05.091654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:56.972 14:38:05 -- accel/accel.sh@20 -- # val= 00:18:56.972 14:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.972 14:38:05 -- accel/accel.sh@19 -- # IFS=: 00:18:56.972 14:38:05 -- accel/accel.sh@19 -- # read -r var val 00:18:56.972 14:38:05 -- accel/accel.sh@20 -- # val= 00:18:56.972 14:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.972 14:38:05 -- accel/accel.sh@19 -- # IFS=: 00:18:56.972 14:38:05 -- accel/accel.sh@19 -- # read -r var val 00:18:56.972 14:38:05 -- accel/accel.sh@20 -- # val= 00:18:56.972 14:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.972 14:38:05 -- accel/accel.sh@19 -- # IFS=: 00:18:56.972 14:38:05 -- accel/accel.sh@19 -- # read -r var val 00:18:56.972 14:38:05 -- accel/accel.sh@20 -- # val=0xf 00:18:56.972 14:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.972 14:38:05 -- accel/accel.sh@19 -- # IFS=: 00:18:56.972 14:38:05 -- accel/accel.sh@19 -- # read -r var val 00:18:56.972 14:38:05 -- accel/accel.sh@20 -- # val= 00:18:56.972 14:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.972 14:38:05 -- accel/accel.sh@19 -- # IFS=: 00:18:56.972 14:38:05 -- accel/accel.sh@19 -- # read -r var val 00:18:56.972 14:38:05 -- accel/accel.sh@20 -- # val= 00:18:56.972 14:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.972 14:38:05 -- accel/accel.sh@19 -- # IFS=: 00:18:56.972 14:38:05 -- accel/accel.sh@19 -- # read -r var val 00:18:56.972 14:38:05 -- accel/accel.sh@20 -- # val=decompress 00:18:56.972 14:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.972 14:38:05 -- accel/accel.sh@23 -- # accel_opc=decompress 00:18:56.972 14:38:05 -- accel/accel.sh@19 -- # IFS=: 00:18:56.972 14:38:05 -- accel/accel.sh@19 -- # read -r var val 00:18:56.972 14:38:05 -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:56.972 14:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.972 14:38:05 -- accel/accel.sh@19 -- # IFS=: 00:18:56.972 14:38:05 -- accel/accel.sh@19 -- # read -r var val 00:18:56.972 14:38:05 -- accel/accel.sh@20 -- # val= 00:18:56.972 14:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.972 14:38:05 -- accel/accel.sh@19 -- # IFS=: 00:18:56.972 14:38:05 -- accel/accel.sh@19 -- # read -r var val 00:18:56.972 14:38:05 -- accel/accel.sh@20 -- # val=software 00:18:56.972 14:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.972 14:38:05 -- accel/accel.sh@22 -- # accel_module=software 00:18:56.972 14:38:05 -- accel/accel.sh@19 -- # IFS=: 00:18:56.972 14:38:05 -- accel/accel.sh@19 -- # read -r var val 00:18:56.972 14:38:05 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:56.972 14:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.972 14:38:05 -- accel/accel.sh@19 -- # IFS=: 
00:18:56.972 14:38:05 -- accel/accel.sh@19 -- # read -r var val 00:18:56.972 14:38:05 -- accel/accel.sh@20 -- # val=32 00:18:56.972 14:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.972 14:38:05 -- accel/accel.sh@19 -- # IFS=: 00:18:56.972 14:38:05 -- accel/accel.sh@19 -- # read -r var val 00:18:56.972 14:38:05 -- accel/accel.sh@20 -- # val=32 00:18:56.972 14:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.972 14:38:05 -- accel/accel.sh@19 -- # IFS=: 00:18:56.972 14:38:05 -- accel/accel.sh@19 -- # read -r var val 00:18:56.972 14:38:05 -- accel/accel.sh@20 -- # val=1 00:18:56.972 14:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.972 14:38:05 -- accel/accel.sh@19 -- # IFS=: 00:18:56.972 14:38:05 -- accel/accel.sh@19 -- # read -r var val 00:18:56.972 14:38:05 -- accel/accel.sh@20 -- # val='1 seconds' 00:18:56.972 14:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.972 14:38:05 -- accel/accel.sh@19 -- # IFS=: 00:18:56.972 14:38:05 -- accel/accel.sh@19 -- # read -r var val 00:18:56.972 14:38:05 -- accel/accel.sh@20 -- # val=Yes 00:18:56.972 14:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.972 14:38:05 -- accel/accel.sh@19 -- # IFS=: 00:18:56.972 14:38:05 -- accel/accel.sh@19 -- # read -r var val 00:18:56.972 14:38:05 -- accel/accel.sh@20 -- # val= 00:18:56.972 14:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.972 14:38:05 -- accel/accel.sh@19 -- # IFS=: 00:18:56.972 14:38:05 -- accel/accel.sh@19 -- # read -r var val 00:18:56.972 14:38:05 -- accel/accel.sh@20 -- # val= 00:18:56.972 14:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.972 14:38:05 -- accel/accel.sh@19 -- # IFS=: 00:18:56.972 14:38:05 -- accel/accel.sh@19 -- # read -r var val 00:18:59.534 14:38:07 -- accel/accel.sh@20 -- # val= 00:18:59.534 14:38:07 -- accel/accel.sh@21 -- # case "$var" in 00:18:59.534 14:38:07 -- accel/accel.sh@19 -- # IFS=: 00:18:59.534 14:38:07 -- accel/accel.sh@19 -- # read -r var val 00:18:59.534 14:38:07 -- accel/accel.sh@20 -- # val= 00:18:59.534 14:38:07 -- accel/accel.sh@21 -- # case "$var" in 00:18:59.534 14:38:07 -- accel/accel.sh@19 -- # IFS=: 00:18:59.534 14:38:07 -- accel/accel.sh@19 -- # read -r var val 00:18:59.534 14:38:07 -- accel/accel.sh@20 -- # val= 00:18:59.534 14:38:07 -- accel/accel.sh@21 -- # case "$var" in 00:18:59.534 14:38:07 -- accel/accel.sh@19 -- # IFS=: 00:18:59.534 14:38:07 -- accel/accel.sh@19 -- # read -r var val 00:18:59.534 14:38:07 -- accel/accel.sh@20 -- # val= 00:18:59.534 14:38:07 -- accel/accel.sh@21 -- # case "$var" in 00:18:59.534 14:38:07 -- accel/accel.sh@19 -- # IFS=: 00:18:59.534 14:38:07 -- accel/accel.sh@19 -- # read -r var val 00:18:59.534 14:38:07 -- accel/accel.sh@20 -- # val= 00:18:59.534 14:38:07 -- accel/accel.sh@21 -- # case "$var" in 00:18:59.534 14:38:07 -- accel/accel.sh@19 -- # IFS=: 00:18:59.534 14:38:07 -- accel/accel.sh@19 -- # read -r var val 00:18:59.534 14:38:07 -- accel/accel.sh@20 -- # val= 00:18:59.534 14:38:07 -- accel/accel.sh@21 -- # case "$var" in 00:18:59.534 14:38:07 -- accel/accel.sh@19 -- # IFS=: 00:18:59.534 14:38:07 -- accel/accel.sh@19 -- # read -r var val 00:18:59.534 14:38:07 -- accel/accel.sh@20 -- # val= 00:18:59.534 14:38:07 -- accel/accel.sh@21 -- # case "$var" in 00:18:59.534 14:38:07 -- accel/accel.sh@19 -- # IFS=: 00:18:59.534 14:38:07 -- accel/accel.sh@19 -- # read -r var val 00:18:59.534 14:38:07 -- accel/accel.sh@20 -- # val= 00:18:59.534 14:38:07 -- accel/accel.sh@21 -- # case "$var" in 00:18:59.534 14:38:07 -- accel/accel.sh@19 -- # IFS=: 00:18:59.534 14:38:07 -- 
accel/accel.sh@19 -- # read -r var val 00:18:59.534 14:38:07 -- accel/accel.sh@20 -- # val= 00:18:59.534 14:38:07 -- accel/accel.sh@21 -- # case "$var" in 00:18:59.534 14:38:07 -- accel/accel.sh@19 -- # IFS=: 00:18:59.534 14:38:07 -- accel/accel.sh@19 -- # read -r var val 00:18:59.535 14:38:07 -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:59.535 14:38:07 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:18:59.535 14:38:07 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:59.535 00:18:59.535 real 0m2.959s 00:18:59.535 user 0m8.475s 00:18:59.535 sys 0m0.229s 00:18:59.535 14:38:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:59.535 14:38:07 -- common/autotest_common.sh@10 -- # set +x 00:18:59.535 ************************************ 00:18:59.535 END TEST accel_decomp_mcore 00:18:59.535 ************************************ 00:18:59.535 14:38:07 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:18:59.535 14:38:07 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:18:59.535 14:38:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:59.535 14:38:07 -- common/autotest_common.sh@10 -- # set +x 00:18:59.535 ************************************ 00:18:59.535 START TEST accel_decomp_full_mcore 00:18:59.535 ************************************ 00:18:59.535 14:38:07 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:18:59.535 14:38:07 -- accel/accel.sh@16 -- # local accel_opc 00:18:59.535 14:38:07 -- accel/accel.sh@17 -- # local accel_module 00:18:59.535 14:38:07 -- accel/accel.sh@19 -- # IFS=: 00:18:59.535 14:38:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:18:59.535 14:38:07 -- accel/accel.sh@19 -- # read -r var val 00:18:59.535 14:38:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:18:59.535 14:38:07 -- accel/accel.sh@12 -- # build_accel_config 00:18:59.535 14:38:07 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:59.535 14:38:07 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:59.535 14:38:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:59.535 14:38:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:59.535 14:38:07 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:59.535 14:38:07 -- accel/accel.sh@40 -- # local IFS=, 00:18:59.535 14:38:07 -- accel/accel.sh@41 -- # jq -r . 00:18:59.535 [2024-04-17 14:38:07.745574] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
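The timing triplets are worth a second look: the single-core runs above finish with user time roughly equal to wall time (e.g. user 0m2.526s against real 0m2.839s for dif_verify), while accel_decomp_mcore burns 0m8.475s of user time in 0m2.959s of wall time — the four reactors really do run in parallel, averaging about 2.9 busy cores. The ratio can be computed straight from the time output:

  # Effective parallelism from the real/user figures in the log above:
  awk 'BEGIN { printf "%.2f cores busy on average\n", 8.475 / 2.959 }'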
00:18:59.535 [2024-04-17 14:38:07.745969] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65861 ] 00:18:59.535 [2024-04-17 14:38:07.921842] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:59.794 [2024-04-17 14:38:08.231655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:59.794 [2024-04-17 14:38:08.231796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:59.794 [2024-04-17 14:38:08.231892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.794 [2024-04-17 14:38:08.231910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:00.052 14:38:08 -- accel/accel.sh@20 -- # val= 00:19:00.052 14:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:19:00.052 14:38:08 -- accel/accel.sh@19 -- # IFS=: 00:19:00.052 14:38:08 -- accel/accel.sh@19 -- # read -r var val 00:19:00.052 14:38:08 -- accel/accel.sh@20 -- # val= 00:19:00.052 14:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:19:00.052 14:38:08 -- accel/accel.sh@19 -- # IFS=: 00:19:00.052 14:38:08 -- accel/accel.sh@19 -- # read -r var val 00:19:00.052 14:38:08 -- accel/accel.sh@20 -- # val= 00:19:00.052 14:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:19:00.052 14:38:08 -- accel/accel.sh@19 -- # IFS=: 00:19:00.052 14:38:08 -- accel/accel.sh@19 -- # read -r var val 00:19:00.052 14:38:08 -- accel/accel.sh@20 -- # val=0xf 00:19:00.052 14:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:19:00.052 14:38:08 -- accel/accel.sh@19 -- # IFS=: 00:19:00.052 14:38:08 -- accel/accel.sh@19 -- # read -r var val 00:19:00.052 14:38:08 -- accel/accel.sh@20 -- # val= 00:19:00.052 14:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:19:00.052 14:38:08 -- accel/accel.sh@19 -- # IFS=: 00:19:00.052 14:38:08 -- accel/accel.sh@19 -- # read -r var val 00:19:00.052 14:38:08 -- accel/accel.sh@20 -- # val= 00:19:00.052 14:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:19:00.052 14:38:08 -- accel/accel.sh@19 -- # IFS=: 00:19:00.052 14:38:08 -- accel/accel.sh@19 -- # read -r var val 00:19:00.052 14:38:08 -- accel/accel.sh@20 -- # val=decompress 00:19:00.052 14:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:19:00.052 14:38:08 -- accel/accel.sh@23 -- # accel_opc=decompress 00:19:00.052 14:38:08 -- accel/accel.sh@19 -- # IFS=: 00:19:00.052 14:38:08 -- accel/accel.sh@19 -- # read -r var val 00:19:00.052 14:38:08 -- accel/accel.sh@20 -- # val='111250 bytes' 00:19:00.052 14:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:19:00.052 14:38:08 -- accel/accel.sh@19 -- # IFS=: 00:19:00.052 14:38:08 -- accel/accel.sh@19 -- # read -r var val 00:19:00.052 14:38:08 -- accel/accel.sh@20 -- # val= 00:19:00.052 14:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:19:00.052 14:38:08 -- accel/accel.sh@19 -- # IFS=: 00:19:00.052 14:38:08 -- accel/accel.sh@19 -- # read -r var val 00:19:00.052 14:38:08 -- accel/accel.sh@20 -- # val=software 00:19:00.052 14:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:19:00.052 14:38:08 -- accel/accel.sh@22 -- # accel_module=software 00:19:00.052 14:38:08 -- accel/accel.sh@19 -- # IFS=: 00:19:00.052 14:38:08 -- accel/accel.sh@19 -- # read -r var val 00:19:00.052 14:38:08 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:19:00.052 14:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:19:00.052 14:38:08 -- accel/accel.sh@19 -- # IFS=: 
00:19:00.052 14:38:08 -- accel/accel.sh@19 -- # read -r var val 00:19:00.052 14:38:08 -- accel/accel.sh@20 -- # val=32 00:19:00.052 14:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:19:00.052 14:38:08 -- accel/accel.sh@19 -- # IFS=: 00:19:00.052 14:38:08 -- accel/accel.sh@19 -- # read -r var val 00:19:00.052 14:38:08 -- accel/accel.sh@20 -- # val=32 00:19:00.052 14:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:19:00.052 14:38:08 -- accel/accel.sh@19 -- # IFS=: 00:19:00.052 14:38:08 -- accel/accel.sh@19 -- # read -r var val 00:19:00.052 14:38:08 -- accel/accel.sh@20 -- # val=1 00:19:00.052 14:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:19:00.052 14:38:08 -- accel/accel.sh@19 -- # IFS=: 00:19:00.052 14:38:08 -- accel/accel.sh@19 -- # read -r var val 00:19:00.052 14:38:08 -- accel/accel.sh@20 -- # val='1 seconds' 00:19:00.052 14:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:19:00.052 14:38:08 -- accel/accel.sh@19 -- # IFS=: 00:19:00.052 14:38:08 -- accel/accel.sh@19 -- # read -r var val 00:19:00.052 14:38:08 -- accel/accel.sh@20 -- # val=Yes 00:19:00.052 14:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:19:00.052 14:38:08 -- accel/accel.sh@19 -- # IFS=: 00:19:00.052 14:38:08 -- accel/accel.sh@19 -- # read -r var val 00:19:00.052 14:38:08 -- accel/accel.sh@20 -- # val= 00:19:00.052 14:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:19:00.052 14:38:08 -- accel/accel.sh@19 -- # IFS=: 00:19:00.052 14:38:08 -- accel/accel.sh@19 -- # read -r var val 00:19:00.052 14:38:08 -- accel/accel.sh@20 -- # val= 00:19:00.052 14:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:19:00.052 14:38:08 -- accel/accel.sh@19 -- # IFS=: 00:19:00.052 14:38:08 -- accel/accel.sh@19 -- # read -r var val 00:19:02.582 14:38:10 -- accel/accel.sh@20 -- # val= 00:19:02.582 14:38:10 -- accel/accel.sh@21 -- # case "$var" in 00:19:02.582 14:38:10 -- accel/accel.sh@19 -- # IFS=: 00:19:02.582 14:38:10 -- accel/accel.sh@19 -- # read -r var val 00:19:02.582 14:38:10 -- accel/accel.sh@20 -- # val= 00:19:02.582 14:38:10 -- accel/accel.sh@21 -- # case "$var" in 00:19:02.582 14:38:10 -- accel/accel.sh@19 -- # IFS=: 00:19:02.582 14:38:10 -- accel/accel.sh@19 -- # read -r var val 00:19:02.582 14:38:10 -- accel/accel.sh@20 -- # val= 00:19:02.582 14:38:10 -- accel/accel.sh@21 -- # case "$var" in 00:19:02.582 14:38:10 -- accel/accel.sh@19 -- # IFS=: 00:19:02.582 14:38:10 -- accel/accel.sh@19 -- # read -r var val 00:19:02.582 14:38:10 -- accel/accel.sh@20 -- # val= 00:19:02.582 14:38:10 -- accel/accel.sh@21 -- # case "$var" in 00:19:02.582 14:38:10 -- accel/accel.sh@19 -- # IFS=: 00:19:02.582 14:38:10 -- accel/accel.sh@19 -- # read -r var val 00:19:02.582 14:38:10 -- accel/accel.sh@20 -- # val= 00:19:02.582 14:38:10 -- accel/accel.sh@21 -- # case "$var" in 00:19:02.582 14:38:10 -- accel/accel.sh@19 -- # IFS=: 00:19:02.582 14:38:10 -- accel/accel.sh@19 -- # read -r var val 00:19:02.582 14:38:10 -- accel/accel.sh@20 -- # val= 00:19:02.582 14:38:10 -- accel/accel.sh@21 -- # case "$var" in 00:19:02.582 14:38:10 -- accel/accel.sh@19 -- # IFS=: 00:19:02.582 14:38:10 -- accel/accel.sh@19 -- # read -r var val 00:19:02.582 14:38:10 -- accel/accel.sh@20 -- # val= 00:19:02.582 14:38:10 -- accel/accel.sh@21 -- # case "$var" in 00:19:02.582 14:38:10 -- accel/accel.sh@19 -- # IFS=: 00:19:02.582 14:38:10 -- accel/accel.sh@19 -- # read -r var val 00:19:02.582 14:38:10 -- accel/accel.sh@20 -- # val= 00:19:02.582 14:38:10 -- accel/accel.sh@21 -- # case "$var" in 00:19:02.582 14:38:10 -- accel/accel.sh@19 -- # IFS=: 00:19:02.582 14:38:10 -- 
accel/accel.sh@19 -- # read -r var val 00:19:02.582 14:38:10 -- accel/accel.sh@20 -- # val= 00:19:02.582 14:38:10 -- accel/accel.sh@21 -- # case "$var" in 00:19:02.582 14:38:10 -- accel/accel.sh@19 -- # IFS=: 00:19:02.582 14:38:10 -- accel/accel.sh@19 -- # read -r var val 00:19:02.582 14:38:10 -- accel/accel.sh@27 -- # [[ -n software ]] 00:19:02.582 14:38:10 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:19:02.582 14:38:10 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:02.582 00:19:02.582 real 0m3.073s 00:19:02.582 user 0m8.692s 00:19:02.582 sys 0m0.239s 00:19:02.582 14:38:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:02.582 14:38:10 -- common/autotest_common.sh@10 -- # set +x 00:19:02.582 ************************************ 00:19:02.582 END TEST accel_decomp_full_mcore 00:19:02.582 ************************************ 00:19:02.582 14:38:10 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:19:02.582 14:38:10 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:19:02.582 14:38:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:02.582 14:38:10 -- common/autotest_common.sh@10 -- # set +x 00:19:02.582 ************************************ 00:19:02.582 START TEST accel_decomp_mthread 00:19:02.582 ************************************ 00:19:02.582 14:38:10 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:19:02.582 14:38:10 -- accel/accel.sh@16 -- # local accel_opc 00:19:02.582 14:38:10 -- accel/accel.sh@17 -- # local accel_module 00:19:02.582 14:38:10 -- accel/accel.sh@19 -- # IFS=: 00:19:02.582 14:38:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:19:02.582 14:38:10 -- accel/accel.sh@19 -- # read -r var val 00:19:02.582 14:38:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:19:02.582 14:38:10 -- accel/accel.sh@12 -- # build_accel_config 00:19:02.582 14:38:10 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:19:02.582 14:38:10 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:19:02.582 14:38:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:19:02.582 14:38:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:19:02.582 14:38:10 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:19:02.582 14:38:10 -- accel/accel.sh@40 -- # local IFS=, 00:19:02.582 14:38:10 -- accel/accel.sh@41 -- # jq -r . 00:19:02.582 [2024-04-17 14:38:10.961794] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
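The full_mcore run above was launched with -m 0xf, and DPDK duly reported four cores with reactors on 0-3 (note they come up in nondeterministic order: 1, 2, 0, 3). The mask is a plain per-core bitmap; decoding one by hand:

  mask=0xf
  for core in {0..7}; do
    (( (mask >> core) & 1 )) && echo "core $core enabled"
  done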
00:19:02.582 [2024-04-17 14:38:10.962246] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65920 ] 00:19:02.582 [2024-04-17 14:38:11.149427] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.841 [2024-04-17 14:38:11.417643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.409 14:38:11 -- accel/accel.sh@20 -- # val= 00:19:03.409 14:38:11 -- accel/accel.sh@21 -- # case "$var" in 00:19:03.409 14:38:11 -- accel/accel.sh@19 -- # IFS=: 00:19:03.409 14:38:11 -- accel/accel.sh@19 -- # read -r var val 00:19:03.409 14:38:11 -- accel/accel.sh@20 -- # val= 00:19:03.409 14:38:11 -- accel/accel.sh@21 -- # case "$var" in 00:19:03.409 14:38:11 -- accel/accel.sh@19 -- # IFS=: 00:19:03.409 14:38:11 -- accel/accel.sh@19 -- # read -r var val 00:19:03.409 14:38:11 -- accel/accel.sh@20 -- # val= 00:19:03.409 14:38:11 -- accel/accel.sh@21 -- # case "$var" in 00:19:03.409 14:38:11 -- accel/accel.sh@19 -- # IFS=: 00:19:03.409 14:38:11 -- accel/accel.sh@19 -- # read -r var val 00:19:03.409 14:38:11 -- accel/accel.sh@20 -- # val=0x1 00:19:03.409 14:38:11 -- accel/accel.sh@21 -- # case "$var" in 00:19:03.409 14:38:11 -- accel/accel.sh@19 -- # IFS=: 00:19:03.409 14:38:11 -- accel/accel.sh@19 -- # read -r var val 00:19:03.409 14:38:11 -- accel/accel.sh@20 -- # val= 00:19:03.409 14:38:11 -- accel/accel.sh@21 -- # case "$var" in 00:19:03.409 14:38:11 -- accel/accel.sh@19 -- # IFS=: 00:19:03.409 14:38:11 -- accel/accel.sh@19 -- # read -r var val 00:19:03.409 14:38:11 -- accel/accel.sh@20 -- # val= 00:19:03.409 14:38:11 -- accel/accel.sh@21 -- # case "$var" in 00:19:03.409 14:38:11 -- accel/accel.sh@19 -- # IFS=: 00:19:03.409 14:38:11 -- accel/accel.sh@19 -- # read -r var val 00:19:03.409 14:38:11 -- accel/accel.sh@20 -- # val=decompress 00:19:03.409 14:38:11 -- accel/accel.sh@21 -- # case "$var" in 00:19:03.409 14:38:11 -- accel/accel.sh@23 -- # accel_opc=decompress 00:19:03.409 14:38:11 -- accel/accel.sh@19 -- # IFS=: 00:19:03.409 14:38:11 -- accel/accel.sh@19 -- # read -r var val 00:19:03.409 14:38:11 -- accel/accel.sh@20 -- # val='4096 bytes' 00:19:03.409 14:38:11 -- accel/accel.sh@21 -- # case "$var" in 00:19:03.409 14:38:11 -- accel/accel.sh@19 -- # IFS=: 00:19:03.409 14:38:11 -- accel/accel.sh@19 -- # read -r var val 00:19:03.409 14:38:11 -- accel/accel.sh@20 -- # val= 00:19:03.409 14:38:11 -- accel/accel.sh@21 -- # case "$var" in 00:19:03.409 14:38:11 -- accel/accel.sh@19 -- # IFS=: 00:19:03.409 14:38:11 -- accel/accel.sh@19 -- # read -r var val 00:19:03.409 14:38:11 -- accel/accel.sh@20 -- # val=software 00:19:03.409 14:38:11 -- accel/accel.sh@21 -- # case "$var" in 00:19:03.409 14:38:11 -- accel/accel.sh@22 -- # accel_module=software 00:19:03.409 14:38:11 -- accel/accel.sh@19 -- # IFS=: 00:19:03.409 14:38:11 -- accel/accel.sh@19 -- # read -r var val 00:19:03.409 14:38:11 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:19:03.409 14:38:11 -- accel/accel.sh@21 -- # case "$var" in 00:19:03.409 14:38:11 -- accel/accel.sh@19 -- # IFS=: 00:19:03.409 14:38:11 -- accel/accel.sh@19 -- # read -r var val 00:19:03.409 14:38:11 -- accel/accel.sh@20 -- # val=32 00:19:03.409 14:38:11 -- accel/accel.sh@21 -- # case "$var" in 00:19:03.409 14:38:11 -- accel/accel.sh@19 -- # IFS=: 00:19:03.409 14:38:11 -- accel/accel.sh@19 -- # read -r var val 00:19:03.409 14:38:11 -- 
accel/accel.sh@20 -- # val=32 00:19:03.409 14:38:11 -- accel/accel.sh@21 -- # case "$var" in 00:19:03.409 14:38:11 -- accel/accel.sh@19 -- # IFS=: 00:19:03.409 14:38:11 -- accel/accel.sh@19 -- # read -r var val 00:19:03.409 14:38:11 -- accel/accel.sh@20 -- # val=2 00:19:03.409 14:38:11 -- accel/accel.sh@21 -- # case "$var" in 00:19:03.409 14:38:11 -- accel/accel.sh@19 -- # IFS=: 00:19:03.409 14:38:11 -- accel/accel.sh@19 -- # read -r var val 00:19:03.409 14:38:11 -- accel/accel.sh@20 -- # val='1 seconds' 00:19:03.409 14:38:11 -- accel/accel.sh@21 -- # case "$var" in 00:19:03.409 14:38:11 -- accel/accel.sh@19 -- # IFS=: 00:19:03.409 14:38:11 -- accel/accel.sh@19 -- # read -r var val 00:19:03.409 14:38:11 -- accel/accel.sh@20 -- # val=Yes 00:19:03.409 14:38:11 -- accel/accel.sh@21 -- # case "$var" in 00:19:03.409 14:38:11 -- accel/accel.sh@19 -- # IFS=: 00:19:03.409 14:38:11 -- accel/accel.sh@19 -- # read -r var val 00:19:03.409 14:38:11 -- accel/accel.sh@20 -- # val= 00:19:03.409 14:38:11 -- accel/accel.sh@21 -- # case "$var" in 00:19:03.409 14:38:11 -- accel/accel.sh@19 -- # IFS=: 00:19:03.409 14:38:11 -- accel/accel.sh@19 -- # read -r var val 00:19:03.409 14:38:11 -- accel/accel.sh@20 -- # val= 00:19:03.409 14:38:11 -- accel/accel.sh@21 -- # case "$var" in 00:19:03.409 14:38:11 -- accel/accel.sh@19 -- # IFS=: 00:19:03.409 14:38:11 -- accel/accel.sh@19 -- # read -r var val 00:19:05.306 14:38:13 -- accel/accel.sh@20 -- # val= 00:19:05.306 14:38:13 -- accel/accel.sh@21 -- # case "$var" in 00:19:05.306 14:38:13 -- accel/accel.sh@19 -- # IFS=: 00:19:05.306 14:38:13 -- accel/accel.sh@19 -- # read -r var val 00:19:05.306 14:38:13 -- accel/accel.sh@20 -- # val= 00:19:05.306 14:38:13 -- accel/accel.sh@21 -- # case "$var" in 00:19:05.306 14:38:13 -- accel/accel.sh@19 -- # IFS=: 00:19:05.306 14:38:13 -- accel/accel.sh@19 -- # read -r var val 00:19:05.306 14:38:13 -- accel/accel.sh@20 -- # val= 00:19:05.306 14:38:13 -- accel/accel.sh@21 -- # case "$var" in 00:19:05.306 14:38:13 -- accel/accel.sh@19 -- # IFS=: 00:19:05.306 14:38:13 -- accel/accel.sh@19 -- # read -r var val 00:19:05.306 14:38:13 -- accel/accel.sh@20 -- # val= 00:19:05.306 14:38:13 -- accel/accel.sh@21 -- # case "$var" in 00:19:05.306 14:38:13 -- accel/accel.sh@19 -- # IFS=: 00:19:05.306 14:38:13 -- accel/accel.sh@19 -- # read -r var val 00:19:05.306 14:38:13 -- accel/accel.sh@20 -- # val= 00:19:05.306 14:38:13 -- accel/accel.sh@21 -- # case "$var" in 00:19:05.306 14:38:13 -- accel/accel.sh@19 -- # IFS=: 00:19:05.306 14:38:13 -- accel/accel.sh@19 -- # read -r var val 00:19:05.306 14:38:13 -- accel/accel.sh@20 -- # val= 00:19:05.306 14:38:13 -- accel/accel.sh@21 -- # case "$var" in 00:19:05.306 14:38:13 -- accel/accel.sh@19 -- # IFS=: 00:19:05.306 14:38:13 -- accel/accel.sh@19 -- # read -r var val 00:19:05.306 14:38:13 -- accel/accel.sh@20 -- # val= 00:19:05.306 14:38:13 -- accel/accel.sh@21 -- # case "$var" in 00:19:05.306 14:38:13 -- accel/accel.sh@19 -- # IFS=: 00:19:05.306 14:38:13 -- accel/accel.sh@19 -- # read -r var val 00:19:05.306 14:38:13 -- accel/accel.sh@27 -- # [[ -n software ]] 00:19:05.306 14:38:13 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:19:05.306 14:38:13 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:05.306 00:19:05.306 real 0m3.002s 00:19:05.306 user 0m2.656s 00:19:05.306 sys 0m0.237s 00:19:05.306 14:38:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:05.306 14:38:13 -- common/autotest_common.sh@10 -- # set +x 00:19:05.306 ************************************ 00:19:05.306 END 
TEST accel_decomp_mthread 00:19:05.306 ************************************ 00:19:05.565 14:38:13 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:19:05.565 14:38:13 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:19:05.565 14:38:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:05.565 14:38:13 -- common/autotest_common.sh@10 -- # set +x 00:19:05.565 ************************************ 00:19:05.565 START TEST accel_deomp_full_mthread 00:19:05.565 ************************************ 00:19:05.565 14:38:14 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:19:05.565 14:38:14 -- accel/accel.sh@16 -- # local accel_opc 00:19:05.565 14:38:14 -- accel/accel.sh@17 -- # local accel_module 00:19:05.565 14:38:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:19:05.565 14:38:14 -- accel/accel.sh@19 -- # IFS=: 00:19:05.565 14:38:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:19:05.565 14:38:14 -- accel/accel.sh@19 -- # read -r var val 00:19:05.565 14:38:14 -- accel/accel.sh@12 -- # build_accel_config 00:19:05.565 14:38:14 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:19:05.565 14:38:14 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:19:05.565 14:38:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:19:05.565 14:38:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:19:05.565 14:38:14 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:19:05.565 14:38:14 -- accel/accel.sh@40 -- # local IFS=, 00:19:05.565 14:38:14 -- accel/accel.sh@41 -- # jq -r . 00:19:05.565 [2024-04-17 14:38:14.065011] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
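Each START TEST / END TEST banner and the real/user/sys triple between them comes from the run_test wrapper in autotest_common.sh, which also toggles xtrace around the body (the @1093/@1112 xtrace_disable calls above). A simplified sketch of its shape, not the verbatim helper:

  run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    time "$@"                               # bash's time keyword prints real/user/sys
    local rc=$?
    echo "END TEST $name"
    echo "************************************"
    return $rc
  }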
00:19:05.565 [2024-04-17 14:38:14.065340] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65976 ] 00:19:05.825 [2024-04-17 14:38:14.233291] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.083 [2024-04-17 14:38:14.504039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.341 14:38:14 -- accel/accel.sh@20 -- # val= 00:19:06.341 14:38:14 -- accel/accel.sh@21 -- # case "$var" in 00:19:06.341 14:38:14 -- accel/accel.sh@19 -- # IFS=: 00:19:06.341 14:38:14 -- accel/accel.sh@19 -- # read -r var val 00:19:06.341 14:38:14 -- accel/accel.sh@20 -- # val= 00:19:06.341 14:38:14 -- accel/accel.sh@21 -- # case "$var" in 00:19:06.341 14:38:14 -- accel/accel.sh@19 -- # IFS=: 00:19:06.341 14:38:14 -- accel/accel.sh@19 -- # read -r var val 00:19:06.341 14:38:14 -- accel/accel.sh@20 -- # val= 00:19:06.341 14:38:14 -- accel/accel.sh@21 -- # case "$var" in 00:19:06.341 14:38:14 -- accel/accel.sh@19 -- # IFS=: 00:19:06.341 14:38:14 -- accel/accel.sh@19 -- # read -r var val 00:19:06.341 14:38:14 -- accel/accel.sh@20 -- # val=0x1 00:19:06.341 14:38:14 -- accel/accel.sh@21 -- # case "$var" in 00:19:06.341 14:38:14 -- accel/accel.sh@19 -- # IFS=: 00:19:06.341 14:38:14 -- accel/accel.sh@19 -- # read -r var val 00:19:06.341 14:38:14 -- accel/accel.sh@20 -- # val= 00:19:06.341 14:38:14 -- accel/accel.sh@21 -- # case "$var" in 00:19:06.341 14:38:14 -- accel/accel.sh@19 -- # IFS=: 00:19:06.341 14:38:14 -- accel/accel.sh@19 -- # read -r var val 00:19:06.341 14:38:14 -- accel/accel.sh@20 -- # val= 00:19:06.341 14:38:14 -- accel/accel.sh@21 -- # case "$var" in 00:19:06.341 14:38:14 -- accel/accel.sh@19 -- # IFS=: 00:19:06.341 14:38:14 -- accel/accel.sh@19 -- # read -r var val 00:19:06.341 14:38:14 -- accel/accel.sh@20 -- # val=decompress 00:19:06.341 14:38:14 -- accel/accel.sh@21 -- # case "$var" in 00:19:06.341 14:38:14 -- accel/accel.sh@23 -- # accel_opc=decompress 00:19:06.341 14:38:14 -- accel/accel.sh@19 -- # IFS=: 00:19:06.341 14:38:14 -- accel/accel.sh@19 -- # read -r var val 00:19:06.341 14:38:14 -- accel/accel.sh@20 -- # val='111250 bytes' 00:19:06.341 14:38:14 -- accel/accel.sh@21 -- # case "$var" in 00:19:06.341 14:38:14 -- accel/accel.sh@19 -- # IFS=: 00:19:06.341 14:38:14 -- accel/accel.sh@19 -- # read -r var val 00:19:06.341 14:38:14 -- accel/accel.sh@20 -- # val= 00:19:06.341 14:38:14 -- accel/accel.sh@21 -- # case "$var" in 00:19:06.341 14:38:14 -- accel/accel.sh@19 -- # IFS=: 00:19:06.341 14:38:14 -- accel/accel.sh@19 -- # read -r var val 00:19:06.341 14:38:14 -- accel/accel.sh@20 -- # val=software 00:19:06.341 14:38:14 -- accel/accel.sh@21 -- # case "$var" in 00:19:06.341 14:38:14 -- accel/accel.sh@22 -- # accel_module=software 00:19:06.341 14:38:14 -- accel/accel.sh@19 -- # IFS=: 00:19:06.342 14:38:14 -- accel/accel.sh@19 -- # read -r var val 00:19:06.342 14:38:14 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:19:06.342 14:38:14 -- accel/accel.sh@21 -- # case "$var" in 00:19:06.342 14:38:14 -- accel/accel.sh@19 -- # IFS=: 00:19:06.342 14:38:14 -- accel/accel.sh@19 -- # read -r var val 00:19:06.342 14:38:14 -- accel/accel.sh@20 -- # val=32 00:19:06.342 14:38:14 -- accel/accel.sh@21 -- # case "$var" in 00:19:06.342 14:38:14 -- accel/accel.sh@19 -- # IFS=: 00:19:06.342 14:38:14 -- accel/accel.sh@19 -- # read -r var val 00:19:06.342 14:38:14 -- 
accel/accel.sh@20 -- # val=32 00:19:06.342 14:38:14 -- accel/accel.sh@21 -- # case "$var" in 00:19:06.342 14:38:14 -- accel/accel.sh@19 -- # IFS=: 00:19:06.342 14:38:14 -- accel/accel.sh@19 -- # read -r var val 00:19:06.342 14:38:14 -- accel/accel.sh@20 -- # val=2 00:19:06.342 14:38:14 -- accel/accel.sh@21 -- # case "$var" in 00:19:06.342 14:38:14 -- accel/accel.sh@19 -- # IFS=: 00:19:06.342 14:38:14 -- accel/accel.sh@19 -- # read -r var val 00:19:06.342 14:38:14 -- accel/accel.sh@20 -- # val='1 seconds' 00:19:06.342 14:38:14 -- accel/accel.sh@21 -- # case "$var" in 00:19:06.342 14:38:14 -- accel/accel.sh@19 -- # IFS=: 00:19:06.342 14:38:14 -- accel/accel.sh@19 -- # read -r var val 00:19:06.342 14:38:14 -- accel/accel.sh@20 -- # val=Yes 00:19:06.342 14:38:14 -- accel/accel.sh@21 -- # case "$var" in 00:19:06.342 14:38:14 -- accel/accel.sh@19 -- # IFS=: 00:19:06.342 14:38:14 -- accel/accel.sh@19 -- # read -r var val 00:19:06.342 14:38:14 -- accel/accel.sh@20 -- # val= 00:19:06.342 14:38:14 -- accel/accel.sh@21 -- # case "$var" in 00:19:06.342 14:38:14 -- accel/accel.sh@19 -- # IFS=: 00:19:06.342 14:38:14 -- accel/accel.sh@19 -- # read -r var val 00:19:06.342 14:38:14 -- accel/accel.sh@20 -- # val= 00:19:06.342 14:38:14 -- accel/accel.sh@21 -- # case "$var" in 00:19:06.342 14:38:14 -- accel/accel.sh@19 -- # IFS=: 00:19:06.342 14:38:14 -- accel/accel.sh@19 -- # read -r var val 00:19:08.875 14:38:16 -- accel/accel.sh@20 -- # val= 00:19:08.875 14:38:16 -- accel/accel.sh@21 -- # case "$var" in 00:19:08.875 14:38:16 -- accel/accel.sh@19 -- # IFS=: 00:19:08.875 14:38:16 -- accel/accel.sh@19 -- # read -r var val 00:19:08.875 14:38:16 -- accel/accel.sh@20 -- # val= 00:19:08.875 14:38:16 -- accel/accel.sh@21 -- # case "$var" in 00:19:08.875 14:38:16 -- accel/accel.sh@19 -- # IFS=: 00:19:08.875 14:38:16 -- accel/accel.sh@19 -- # read -r var val 00:19:08.875 14:38:16 -- accel/accel.sh@20 -- # val= 00:19:08.875 14:38:16 -- accel/accel.sh@21 -- # case "$var" in 00:19:08.875 14:38:16 -- accel/accel.sh@19 -- # IFS=: 00:19:08.875 14:38:16 -- accel/accel.sh@19 -- # read -r var val 00:19:08.875 14:38:16 -- accel/accel.sh@20 -- # val= 00:19:08.875 14:38:16 -- accel/accel.sh@21 -- # case "$var" in 00:19:08.875 14:38:16 -- accel/accel.sh@19 -- # IFS=: 00:19:08.875 14:38:16 -- accel/accel.sh@19 -- # read -r var val 00:19:08.875 14:38:16 -- accel/accel.sh@20 -- # val= 00:19:08.875 14:38:16 -- accel/accel.sh@21 -- # case "$var" in 00:19:08.875 14:38:16 -- accel/accel.sh@19 -- # IFS=: 00:19:08.875 14:38:16 -- accel/accel.sh@19 -- # read -r var val 00:19:08.875 14:38:16 -- accel/accel.sh@20 -- # val= 00:19:08.875 14:38:16 -- accel/accel.sh@21 -- # case "$var" in 00:19:08.875 14:38:16 -- accel/accel.sh@19 -- # IFS=: 00:19:08.875 14:38:16 -- accel/accel.sh@19 -- # read -r var val 00:19:08.875 14:38:16 -- accel/accel.sh@20 -- # val= 00:19:08.875 14:38:16 -- accel/accel.sh@21 -- # case "$var" in 00:19:08.875 14:38:16 -- accel/accel.sh@19 -- # IFS=: 00:19:08.875 14:38:16 -- accel/accel.sh@19 -- # read -r var val 00:19:08.875 14:38:16 -- accel/accel.sh@27 -- # [[ -n software ]] 00:19:08.875 14:38:16 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:19:08.875 14:38:16 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:08.875 00:19:08.875 real 0m2.939s 00:19:08.875 user 0m2.654s 00:19:08.875 sys 0m0.180s 00:19:08.875 14:38:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:08.875 14:38:16 -- common/autotest_common.sh@10 -- # set +x 00:19:08.875 ************************************ 00:19:08.875 END 
TEST accel_deomp_full_mthread 00:19:08.875 ************************************ 00:19:08.875 14:38:17 -- accel/accel.sh@124 -- # [[ n == y ]] 00:19:08.875 14:38:17 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:19:08.875 14:38:17 -- accel/accel.sh@137 -- # build_accel_config 00:19:08.875 14:38:17 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:19:08.875 14:38:17 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:19:08.875 14:38:17 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:19:08.875 14:38:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:08.875 14:38:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:19:08.875 14:38:17 -- common/autotest_common.sh@10 -- # set +x 00:19:08.875 14:38:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:19:08.875 14:38:17 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:19:08.875 14:38:17 -- accel/accel.sh@40 -- # local IFS=, 00:19:08.875 14:38:17 -- accel/accel.sh@41 -- # jq -r . 00:19:08.875 ************************************ 00:19:08.875 START TEST accel_dif_functional_tests 00:19:08.875 ************************************ 00:19:08.875 14:38:17 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:19:08.875 [2024-04-17 14:38:17.176526] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:19:08.875 [2024-04-17 14:38:17.176680] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66034 ] 00:19:08.875 [2024-04-17 14:38:17.345734] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:09.156 [2024-04-17 14:38:17.601070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:09.156 [2024-04-17 14:38:17.601137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.156 [2024-04-17 14:38:17.601155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:09.724 00:19:09.724 00:19:09.724 CUnit - A unit testing framework for C - Version 2.1-3 00:19:09.724 http://cunit.sourceforge.net/ 00:19:09.724 00:19:09.724 00:19:09.724 Suite: accel_dif 00:19:09.724 Test: verify: DIF generated, GUARD check ...passed 00:19:09.724 Test: verify: DIF generated, APPTAG check ...passed 00:19:09.724 Test: verify: DIF generated, REFTAG check ...passed 00:19:09.724 Test: verify: DIF not generated, GUARD check ...passed 00:19:09.724 Test: verify: DIF not generated, APPTAG check ...[2024-04-17 14:38:18.021002] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:19:09.724 [2024-04-17 14:38:18.021090] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:19:09.724 passed 00:19:09.724 Test: verify: DIF not generated, REFTAG check ...passed 00:19:09.724 Test: verify: APPTAG correct, APPTAG check ...passed 00:19:09.724 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:19:09.724 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:19:09.724 Test: verify: REFTAG incorrect, REFTAG ignore ...[2024-04-17 14:38:18.021169] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:19:09.724 [2024-04-17 14:38:18.021220] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:19:09.724 [2024-04-17 14:38:18.021258] dif.c: 
776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:19:09.724 [2024-04-17 14:38:18.021289] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:19:09.724 [2024-04-17 14:38:18.021366] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:19:09.724 passed 00:19:09.724 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:19:09.724 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:19:09.724 Test: generate copy: DIF generated, GUARD check ...passed 00:19:09.724 Test: generate copy: DIF generated, APTTAG check ...passed 00:19:09.724 Test: generate copy: DIF generated, REFTAG check ...passed 00:19:09.724 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:19:09.724 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:19:09.724 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:19:09.724 Test: generate copy: iovecs-len validate ...[2024-04-17 14:38:18.021595] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:19:09.724 [2024-04-17 14:38:18.021974] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:19:09.724 passed 00:19:09.724 Test: generate copy: buffer alignment validate ...passed 00:19:09.724 00:19:09.724 Run Summary: Type Total Ran Passed Failed Inactive 00:19:09.724 suites 1 1 n/a 0 0 00:19:09.724 tests 20 20 20 0 0 00:19:09.724 asserts 204 204 204 0 n/a 00:19:09.724 00:19:09.724 Elapsed time = 0.003 seconds 00:19:11.102 00:19:11.102 real 0m2.336s 00:19:11.102 user 0m4.589s 00:19:11.102 sys 0m0.251s 00:19:11.102 14:38:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:11.102 14:38:19 -- common/autotest_common.sh@10 -- # set +x 00:19:11.102 ************************************ 00:19:11.102 END TEST accel_dif_functional_tests 00:19:11.102 ************************************ 00:19:11.102 ************************************ 00:19:11.102 END TEST accel 00:19:11.102 ************************************ 00:19:11.102 00:19:11.102 real 1m12.571s 00:19:11.102 user 1m17.404s 00:19:11.102 sys 0m7.341s 00:19:11.102 14:38:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:11.102 14:38:19 -- common/autotest_common.sh@10 -- # set +x 00:19:11.102 14:38:19 -- spdk/autotest.sh@179 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:19:11.102 14:38:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:11.102 14:38:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:11.102 14:38:19 -- common/autotest_common.sh@10 -- # set +x 00:19:11.102 ************************************ 00:19:11.102 START TEST accel_rpc 00:19:11.102 ************************************ 00:19:11.102 14:38:19 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:19:11.361 * Looking for test storage... 
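The accel_dif failures logged above compare the three T10 DIF protection-information fields carried in the 8 bytes appended to each block: a 16-bit guard CRC, a 16-bit application tag, and a 32-bit reference tag. The Actual=5a5a / Actual=5a5a5a5a values suggest the PI area still held the test's fill byte, i.e. protection information was deliberately not generated for those negative cases. Splitting one 8-byte PI blob into its fields (the blob value is illustrative, not taken from the run):

  pi=5a5a00145a5a5a5a                       # guard | app tag | ref tag, hex
  echo "guard=0x${pi:0:4} app_tag=0x${pi:4:4} ref_tag=0x${pi:8:8}"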
00:19:11.361 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:19:11.361 14:38:19 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:19:11.361 14:38:19 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=66127 00:19:11.361 14:38:19 -- accel/accel_rpc.sh@15 -- # waitforlisten 66127 00:19:11.361 14:38:19 -- common/autotest_common.sh@817 -- # '[' -z 66127 ']' 00:19:11.361 14:38:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.361 14:38:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:11.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.361 14:38:19 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:19:11.361 14:38:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.361 14:38:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:11.361 14:38:19 -- common/autotest_common.sh@10 -- # set +x 00:19:11.361 [2024-04-17 14:38:19.818211] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:19:11.362 [2024-04-17 14:38:19.818826] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66127 ] 00:19:11.619 [2024-04-17 14:38:19.991457] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.877 [2024-04-17 14:38:20.335428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.442 14:38:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:12.442 14:38:20 -- common/autotest_common.sh@850 -- # return 0 00:19:12.442 14:38:20 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:19:12.442 14:38:20 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:19:12.442 14:38:20 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:19:12.442 14:38:20 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:19:12.442 14:38:20 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:19:12.442 14:38:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:12.442 14:38:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:12.442 14:38:20 -- common/autotest_common.sh@10 -- # set +x 00:19:12.442 ************************************ 00:19:12.442 START TEST accel_assign_opcode 00:19:12.442 ************************************ 00:19:12.442 14:38:20 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:19:12.442 14:38:20 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:19:12.442 14:38:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.442 14:38:20 -- common/autotest_common.sh@10 -- # set +x 00:19:12.442 [2024-04-17 14:38:20.864513] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:19:12.442 14:38:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.442 14:38:20 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:19:12.442 14:38:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.442 14:38:20 -- common/autotest_common.sh@10 -- # set +x 00:19:12.442 [2024-04-17 14:38:20.872421] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:19:12.442 14:38:20 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.442 14:38:20 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:19:12.442 14:38:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.442 14:38:20 -- common/autotest_common.sh@10 -- # set +x 00:19:13.377 14:38:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.377 14:38:21 -- accel/accel_rpc.sh@42 -- # grep software 00:19:13.377 14:38:21 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:19:13.377 14:38:21 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:19:13.377 14:38:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.377 14:38:21 -- common/autotest_common.sh@10 -- # set +x 00:19:13.377 14:38:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.377 software 00:19:13.377 00:19:13.377 real 0m1.088s 00:19:13.377 user 0m0.050s 00:19:13.377 sys 0m0.011s 00:19:13.377 14:38:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:13.377 14:38:21 -- common/autotest_common.sh@10 -- # set +x 00:19:13.377 ************************************ 00:19:13.377 END TEST accel_assign_opcode 00:19:13.377 ************************************ 00:19:13.635 14:38:21 -- accel/accel_rpc.sh@55 -- # killprocess 66127 00:19:13.635 14:38:21 -- common/autotest_common.sh@936 -- # '[' -z 66127 ']' 00:19:13.635 14:38:21 -- common/autotest_common.sh@940 -- # kill -0 66127 00:19:13.635 14:38:21 -- common/autotest_common.sh@941 -- # uname 00:19:13.635 14:38:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:13.635 14:38:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66127 00:19:13.635 14:38:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:13.635 14:38:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:13.635 killing process with pid 66127 00:19:13.635 14:38:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66127' 00:19:13.635 14:38:22 -- common/autotest_common.sh@955 -- # kill 66127 00:19:13.635 14:38:22 -- common/autotest_common.sh@960 -- # wait 66127 00:19:16.167 00:19:16.167 real 0m5.124s 00:19:16.167 user 0m5.117s 00:19:16.167 sys 0m0.623s 00:19:16.167 14:38:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:16.167 ************************************ 00:19:16.167 END TEST accel_rpc 00:19:16.167 ************************************ 00:19:16.167 14:38:24 -- common/autotest_common.sh@10 -- # set +x 00:19:16.494 14:38:24 -- spdk/autotest.sh@180 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:19:16.494 14:38:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:16.494 14:38:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:16.494 14:38:24 -- common/autotest_common.sh@10 -- # set +x 00:19:16.494 ************************************ 00:19:16.494 START TEST app_cmdline 00:19:16.494 ************************************ 00:19:16.494 14:38:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:19:16.494 * Looking for test storage... 
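The assign-opcode sequence above is the entire test: assign the copy opcode to a module while spdk_tgt is still parked in its --wait-for-rpc pre-init state, start the framework, then confirm the assignment stuck. The same three steps by hand against a running target (paths as used throughout this workspace):

  scripts/rpc.py accel_assign_opc -o copy -m software
  scripts/rpc.py framework_start_init
  scripts/rpc.py accel_get_opc_assignments | jq -r .copy    # expect: software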
00:19:16.494 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:19:16.494 14:38:24 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:19:16.494 14:38:24 -- app/cmdline.sh@17 -- # spdk_tgt_pid=66264 00:19:16.494 14:38:24 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:19:16.494 14:38:24 -- app/cmdline.sh@18 -- # waitforlisten 66264 00:19:16.494 14:38:24 -- common/autotest_common.sh@817 -- # '[' -z 66264 ']' 00:19:16.494 14:38:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.494 14:38:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:16.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.494 14:38:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.494 14:38:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:16.494 14:38:24 -- common/autotest_common.sh@10 -- # set +x 00:19:16.494 [2024-04-17 14:38:25.060702] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:19:16.494 [2024-04-17 14:38:25.060828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66264 ] 00:19:16.752 [2024-04-17 14:38:25.240305] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.010 [2024-04-17 14:38:25.508767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.389 14:38:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:18.389 14:38:26 -- common/autotest_common.sh@850 -- # return 0 00:19:18.389 14:38:26 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:19:18.389 { 00:19:18.389 "version": "SPDK v24.05-pre git sha1 0fa934e8f", 00:19:18.389 "fields": { 00:19:18.389 "major": 24, 00:19:18.389 "minor": 5, 00:19:18.389 "patch": 0, 00:19:18.389 "suffix": "-pre", 00:19:18.389 "commit": "0fa934e8f" 00:19:18.389 } 00:19:18.389 } 00:19:18.389 14:38:26 -- app/cmdline.sh@22 -- # expected_methods=() 00:19:18.389 14:38:26 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:19:18.389 14:38:26 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:19:18.389 14:38:26 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:19:18.389 14:38:26 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:19:18.389 14:38:26 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:19:18.389 14:38:26 -- app/cmdline.sh@26 -- # sort 00:19:18.389 14:38:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.389 14:38:26 -- common/autotest_common.sh@10 -- # set +x 00:19:18.389 14:38:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.389 14:38:26 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:19:18.389 14:38:26 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:19:18.389 14:38:26 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:19:18.389 14:38:26 -- common/autotest_common.sh@638 -- # local es=0 00:19:18.389 14:38:26 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:19:18.389 14:38:26 -- 
common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:18.389 14:38:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:18.389 14:38:26 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:18.389 14:38:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:18.389 14:38:26 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:18.389 14:38:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:18.389 14:38:26 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:18.389 14:38:26 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:18.389 14:38:26 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:19:18.648 request: 00:19:18.648 { 00:19:18.648 "method": "env_dpdk_get_mem_stats", 00:19:18.648 "req_id": 1 00:19:18.648 } 00:19:18.648 Got JSON-RPC error response 00:19:18.648 response: 00:19:18.648 { 00:19:18.648 "code": -32601, 00:19:18.648 "message": "Method not found" 00:19:18.648 } 00:19:18.648 14:38:27 -- common/autotest_common.sh@641 -- # es=1 00:19:18.648 14:38:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:18.648 14:38:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:18.648 14:38:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:18.648 14:38:27 -- app/cmdline.sh@1 -- # killprocess 66264 00:19:18.648 14:38:27 -- common/autotest_common.sh@936 -- # '[' -z 66264 ']' 00:19:18.648 14:38:27 -- common/autotest_common.sh@940 -- # kill -0 66264 00:19:18.648 14:38:27 -- common/autotest_common.sh@941 -- # uname 00:19:18.648 14:38:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:18.648 14:38:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66264 00:19:18.648 14:38:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:18.648 14:38:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:18.648 14:38:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66264' 00:19:18.648 killing process with pid 66264 00:19:18.648 14:38:27 -- common/autotest_common.sh@955 -- # kill 66264 00:19:18.648 14:38:27 -- common/autotest_common.sh@960 -- # wait 66264 00:19:21.936 00:19:21.936 real 0m4.968s 00:19:21.936 user 0m5.353s 00:19:21.936 sys 0m0.627s 00:19:21.936 14:38:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:21.936 14:38:29 -- common/autotest_common.sh@10 -- # set +x 00:19:21.936 ************************************ 00:19:21.936 END TEST app_cmdline 00:19:21.936 ************************************ 00:19:21.936 14:38:29 -- spdk/autotest.sh@181 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:19:21.936 14:38:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:21.936 14:38:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:21.936 14:38:29 -- common/autotest_common.sh@10 -- # set +x 00:19:21.936 ************************************ 00:19:21.936 START TEST version 00:19:21.936 ************************************ 00:19:21.936 14:38:29 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:19:21.936 * Looking for test storage... 
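app_cmdline above exercises the --rpcs-allowed allowlist: spdk_get_version and rpc_get_methods answer normally, while env_dpdk_get_mem_stats is rejected with JSON-RPC error -32601 ("Method not found"). Reproducing the check by hand with the same binaries as in the log:

  build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  # wait for /var/tmp/spdk.sock to come up before issuing RPCs
  scripts/rpc.py spdk_get_version           # allowed: prints the version object
  scripts/rpc.py env_dpdk_get_mem_stats     # blocked: "Method not found"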
00:19:21.936 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:19:21.936 14:38:30 -- app/version.sh@17 -- # get_header_version major 00:19:21.936 14:38:30 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:19:21.936 14:38:30 -- app/version.sh@14 -- # cut -f2 00:19:21.936 14:38:30 -- app/version.sh@14 -- # tr -d '"' 00:19:21.936 14:38:30 -- app/version.sh@17 -- # major=24 00:19:21.936 14:38:30 -- app/version.sh@18 -- # get_header_version minor 00:19:21.936 14:38:30 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:19:21.936 14:38:30 -- app/version.sh@14 -- # cut -f2 00:19:21.936 14:38:30 -- app/version.sh@14 -- # tr -d '"' 00:19:21.936 14:38:30 -- app/version.sh@18 -- # minor=5 00:19:21.936 14:38:30 -- app/version.sh@19 -- # get_header_version patch 00:19:21.936 14:38:30 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:19:21.936 14:38:30 -- app/version.sh@14 -- # cut -f2 00:19:21.936 14:38:30 -- app/version.sh@14 -- # tr -d '"' 00:19:21.936 14:38:30 -- app/version.sh@19 -- # patch=0 00:19:21.936 14:38:30 -- app/version.sh@20 -- # get_header_version suffix 00:19:21.936 14:38:30 -- app/version.sh@14 -- # cut -f2 00:19:21.936 14:38:30 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:19:21.936 14:38:30 -- app/version.sh@14 -- # tr -d '"' 00:19:21.936 14:38:30 -- app/version.sh@20 -- # suffix=-pre 00:19:21.936 14:38:30 -- app/version.sh@22 -- # version=24.5 00:19:21.936 14:38:30 -- app/version.sh@25 -- # (( patch != 0 )) 00:19:21.936 14:38:30 -- app/version.sh@28 -- # version=24.5rc0 00:19:21.936 14:38:30 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:19:21.936 14:38:30 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:19:21.936 14:38:30 -- app/version.sh@30 -- # py_version=24.5rc0 00:19:21.936 14:38:30 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:19:21.936 00:19:21.936 real 0m0.171s 00:19:21.936 user 0m0.089s 00:19:21.936 sys 0m0.118s 00:19:21.936 14:38:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:21.936 14:38:30 -- common/autotest_common.sh@10 -- # set +x 00:19:21.936 ************************************ 00:19:21.936 END TEST version 00:19:21.936 ************************************ 00:19:21.936 14:38:30 -- spdk/autotest.sh@183 -- # '[' 0 -eq 1 ']' 00:19:21.936 14:38:30 -- spdk/autotest.sh@193 -- # uname -s 00:19:21.936 14:38:30 -- spdk/autotest.sh@193 -- # [[ Linux == Linux ]] 00:19:21.936 14:38:30 -- spdk/autotest.sh@194 -- # [[ 0 -eq 1 ]] 00:19:21.936 14:38:30 -- spdk/autotest.sh@194 -- # [[ 0 -eq 1 ]] 00:19:21.936 14:38:30 -- spdk/autotest.sh@206 -- # '[' 1 -eq 1 ']' 00:19:21.936 14:38:30 -- spdk/autotest.sh@207 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:19:21.936 14:38:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:21.936 14:38:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:21.936 14:38:30 -- common/autotest_common.sh@10 -- # set +x 00:19:21.936 ************************************ 00:19:21.936 START TEST blockdev_nvme 
00:19:21.936 ************************************ 00:19:21.936 14:38:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:19:21.936 * Looking for test storage... 00:19:21.936 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:19:21.936 14:38:30 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:21.936 14:38:30 -- bdev/nbd_common.sh@6 -- # set -e 00:19:21.936 14:38:30 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:21.936 14:38:30 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:21.936 14:38:30 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:21.936 14:38:30 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:21.936 14:38:30 -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:19:21.936 14:38:30 -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:21.936 14:38:30 -- bdev/blockdev.sh@20 -- # : 00:19:21.936 14:38:30 -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:19:21.936 14:38:30 -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:19:21.936 14:38:30 -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:19:21.936 14:38:30 -- bdev/blockdev.sh@674 -- # uname -s 00:19:21.936 14:38:30 -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:19:21.936 14:38:30 -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:19:21.936 14:38:30 -- bdev/blockdev.sh@682 -- # test_type=nvme 00:19:21.936 14:38:30 -- bdev/blockdev.sh@683 -- # crypto_device= 00:19:21.936 14:38:30 -- bdev/blockdev.sh@684 -- # dek= 00:19:21.936 14:38:30 -- bdev/blockdev.sh@685 -- # env_ctx= 00:19:21.936 14:38:30 -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:19:21.936 14:38:30 -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:19:21.936 14:38:30 -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:19:21.936 14:38:30 -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:19:21.936 14:38:30 -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:19:21.936 14:38:30 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=66453 00:19:21.936 14:38:30 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:21.936 14:38:30 -- bdev/blockdev.sh@49 -- # waitforlisten 66453 00:19:21.936 14:38:30 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:21.936 14:38:30 -- common/autotest_common.sh@817 -- # '[' -z 66453 ']' 00:19:21.936 14:38:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.936 14:38:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:21.936 14:38:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:21.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:21.936 14:38:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:21.936 14:38:30 -- common/autotest_common.sh@10 -- # set +x 00:19:21.936 [2024-04-17 14:38:30.521425] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
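blockdev.sh runs one generic bdev suite against whatever test_type selects; the [[ nvme == bdev ]] / [[ nvme == crypto_* ]] probes above are that dispatch, and for nvme the controller list is generated by gen_nvme.sh and pushed through load_subsystem_config (visible in the lines that follow). A heavily simplified sketch of that setup step:

  test_type=nvme
  case "$test_type" in
    nvme)
      json=$(scripts/gen_nvme.sh)           # emits the bdev_nvme_attach_controller entries
      rpc_cmd load_subsystem_config -j "$json"
      ;;
  esac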
00:19:21.937 [2024-04-17 14:38:30.521606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66453 ] 00:19:22.249 [2024-04-17 14:38:30.707755] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.522 [2024-04-17 14:38:31.049194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.897 14:38:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:23.897 14:38:32 -- common/autotest_common.sh@850 -- # return 0 00:19:23.897 14:38:32 -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:19:23.897 14:38:32 -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:19:23.897 14:38:32 -- bdev/blockdev.sh@81 -- # local json 00:19:23.897 14:38:32 -- bdev/blockdev.sh@82 -- # mapfile -t json 00:19:23.897 14:38:32 -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:23.897 14:38:32 -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:19:23.897 14:38:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.897 14:38:32 -- common/autotest_common.sh@10 -- # set +x 00:19:23.897 14:38:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.897 14:38:32 -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:19:23.897 14:38:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.897 14:38:32 -- common/autotest_common.sh@10 -- # set +x 00:19:23.897 14:38:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.897 14:38:32 -- bdev/blockdev.sh@740 -- # cat 00:19:23.897 14:38:32 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:19:23.898 14:38:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.898 14:38:32 -- common/autotest_common.sh@10 -- # set +x 00:19:23.898 14:38:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.898 14:38:32 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:19:23.898 14:38:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.898 14:38:32 -- common/autotest_common.sh@10 -- # set +x 00:19:24.156 14:38:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.156 14:38:32 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:24.156 14:38:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.156 14:38:32 -- common/autotest_common.sh@10 -- # set +x 00:19:24.156 14:38:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.156 14:38:32 -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:19:24.156 14:38:32 -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:19:24.156 14:38:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.156 14:38:32 -- common/autotest_common.sh@10 -- # set +x 00:19:24.156 14:38:32 -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:19:24.156 14:38:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.156 14:38:32 -- 
bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:19:24.156 14:38:32 -- bdev/blockdev.sh@749 -- # jq -r .name 00:19:24.157 14:38:32 -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "18595bc2-0d49-45a2-ba49-c7c70f8aea79"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "18595bc2-0d49-45a2-ba49-c7c70f8aea79",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "738933b1-f646-4a1e-ab9e-14c0781c51e9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "738933b1-f646-4a1e-ab9e-14c0781c51e9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "17bcd65b-fc28-4076-ab0d-6e0f1ebd187a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "17bcd65b-fc28-4076-ab0d-6e0f1ebd187a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": 
"0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "fea5ddd5-7abe-4f8b-80c0-7f5750d52bc3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fea5ddd5-7abe-4f8b-80c0-7f5750d52bc3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "11bec886-dc35-43d2-9cba-2526f929ca9c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "11bec886-dc35-43d2-9cba-2526f929ca9c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "47da902a-00e7-40c1-b2dd-6e40e1ebe9af"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "47da902a-00e7-40c1-b2dd-6e40e1ebe9af",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": 
false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:19:24.157 14:38:32 -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:19:24.157 14:38:32 -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:19:24.157 14:38:32 -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:19:24.157 14:38:32 -- bdev/blockdev.sh@754 -- # killprocess 66453 00:19:24.157 14:38:32 -- common/autotest_common.sh@936 -- # '[' -z 66453 ']' 00:19:24.157 14:38:32 -- common/autotest_common.sh@940 -- # kill -0 66453 00:19:24.157 14:38:32 -- common/autotest_common.sh@941 -- # uname 00:19:24.157 14:38:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:24.157 14:38:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66453 00:19:24.157 14:38:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:24.157 14:38:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:24.157 killing process with pid 66453 00:19:24.157 14:38:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66453' 00:19:24.157 14:38:32 -- common/autotest_common.sh@955 -- # kill 66453 00:19:24.157 14:38:32 -- common/autotest_common.sh@960 -- # wait 66453 00:19:27.444 14:38:35 -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:27.444 14:38:35 -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:19:27.444 14:38:35 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:19:27.444 14:38:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:27.444 14:38:35 -- common/autotest_common.sh@10 -- # set +x 00:19:27.444 ************************************ 00:19:27.444 START TEST bdev_hello_world 00:19:27.444 ************************************ 00:19:27.444 14:38:35 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:19:27.444 [2024-04-17 14:38:35.769057] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
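The mapfile/jq pair traced above is how blockdev.sh collapses that verbose bdev dump into a plain array of names. A minimal bash sketch of the same idea, assuming a running SPDK target and the repo-default rpc.py path (illustrative, not the script's exact code):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # bdev_get_bdevs prints JSON describing every bdev; a jq -r name filter
    # reduces it to one bdev name per line for mapfile to collect.
    mapfile -t bdevs_name < <("$rpc" bdev_get_bdevs | jq -r '.[].name')
    bdev_list=("${bdevs_name[@]}")
    echo "${bdev_list[0]}"   # Nvme0n1 on this VM, chosen as hello_world_bdev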
00:19:27.444 [2024-04-17 14:38:35.769233] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66563 ] 00:19:27.444 [2024-04-17 14:38:35.952602] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.702 [2024-04-17 14:38:36.225822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.671 [2024-04-17 14:38:36.995240] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:28.671 [2024-04-17 14:38:36.995746] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:19:28.671 [2024-04-17 14:38:36.995865] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:28.671 [2024-04-17 14:38:36.999484] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:28.671 [2024-04-17 14:38:37.000361] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:28.671 [2024-04-17 14:38:37.000508] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:28.671 [2024-04-17 14:38:37.000770] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:19:28.671 00:19:28.671 [2024-04-17 14:38:37.000878] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:30.126 00:19:30.126 real 0m2.798s 00:19:30.126 user 0m2.385s 00:19:30.126 sys 0m0.296s 00:19:30.126 ************************************ 00:19:30.126 END TEST bdev_hello_world 00:19:30.126 ************************************ 00:19:30.126 14:38:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:30.126 14:38:38 -- common/autotest_common.sh@10 -- # set +x 00:19:30.126 14:38:38 -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:19:30.126 14:38:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:30.126 14:38:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:30.126 14:38:38 -- common/autotest_common.sh@10 -- # set +x 00:19:30.126 ************************************ 00:19:30.126 START TEST bdev_bounds 00:19:30.126 ************************************ 00:19:30.126 14:38:38 -- common/autotest_common.sh@1111 -- # bdev_bounds '' 00:19:30.126 14:38:38 -- bdev/blockdev.sh@290 -- # bdevio_pid=66620 00:19:30.126 14:38:38 -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:30.126 Process bdevio pid: 66620 00:19:30.126 14:38:38 -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 66620' 00:19:30.126 14:38:38 -- bdev/blockdev.sh@293 -- # waitforlisten 66620 00:19:30.126 14:38:38 -- common/autotest_common.sh@817 -- # '[' -z 66620 ']' 00:19:30.126 14:38:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.126 14:38:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:30.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.126 14:38:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
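The bdev_hello_world test above reduces to one example binary run against the generated JSON config. Reproducing it by hand looks roughly like this (arguments taken from the run_test line; sudo is an assumption about local permissions):

    cd /home/vagrant/spdk_repo/spdk
    # hello_bdev opens the bdev, writes a test buffer, reads it back and exits.
    sudo build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1
    # On success the NOTICE log ends with: Read string from bdev : Hello World!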
00:19:30.126 14:38:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:30.126 14:38:38 -- common/autotest_common.sh@10 -- # set +x 00:19:30.126 14:38:38 -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:30.389 [2024-04-17 14:38:38.717442] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:19:30.390 [2024-04-17 14:38:38.717609] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66620 ] 00:19:30.390 [2024-04-17 14:38:38.893184] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:30.648 [2024-04-17 14:38:39.193970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.648 [2024-04-17 14:38:39.194162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.648 [2024-04-17 14:38:39.194177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.633 14:38:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:31.633 14:38:40 -- common/autotest_common.sh@850 -- # return 0 00:19:31.633 14:38:40 -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:31.633 I/O targets: 00:19:31.633 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:19:31.633 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:19:31.633 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:19:31.634 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:19:31.634 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:19:31.634 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:19:31.634 00:19:31.634 00:19:31.634 CUnit - A unit testing framework for C - Version 2.1-3 00:19:31.634 http://cunit.sourceforge.net/ 00:19:31.634 00:19:31.634 00:19:31.634 Suite: bdevio tests on: Nvme3n1 00:19:31.634 Test: blockdev write read block ...passed 00:19:31.634 Test: blockdev write zeroes read block ...passed 00:19:31.634 Test: blockdev write zeroes read no split ...passed 00:19:31.934 Test: blockdev write zeroes read split ...passed 00:19:31.934 Test: blockdev write zeroes read split partial ...passed 00:19:31.934 Test: blockdev reset ...[2024-04-17 14:38:40.292814] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:19:31.934 [2024-04-17 14:38:40.297299] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:31.934 passed 00:19:31.934 Test: blockdev write read 8 blocks ...passed 00:19:31.934 Test: blockdev write read size > 128k ...passed 00:19:31.934 Test: blockdev write read invalid size ...passed 00:19:31.934 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:31.934 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:31.934 Test: blockdev write read max offset ...passed 00:19:31.934 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:31.934 Test: blockdev writev readv 8 blocks ...passed 00:19:31.934 Test: blockdev writev readv 30 x 1block ...passed 00:19:31.934 Test: blockdev writev readv block ...passed 00:19:31.934 Test: blockdev writev readv size > 128k ...passed 00:19:31.934 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:31.934 Test: blockdev comparev and writev ...[2024-04-17 14:38:40.312072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x20420e000 len:0x1000 00:19:31.934 [2024-04-17 14:38:40.312303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:19:31.934 passed 00:19:31.934 Test: blockdev nvme passthru rw ...passed 00:19:31.934 Test: blockdev nvme passthru vendor specific ...[2024-04-17 14:38:40.313904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:19:31.934 [2024-04-17 14:38:40.314217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:19:31.934 passed 00:19:31.934 Test: blockdev nvme admin passthru ...passed 00:19:31.934 Test: blockdev copy ...passed 00:19:31.934 Suite: bdevio tests on: Nvme2n3 00:19:31.934 Test: blockdev write read block ...passed 00:19:31.934 Test: blockdev write zeroes read block ...passed 00:19:31.934 Test: blockdev write zeroes read no split ...passed 00:19:31.934 Test: blockdev write zeroes read split ...passed 00:19:31.934 Test: blockdev write zeroes read split partial ...passed 00:19:31.934 Test: blockdev reset ...[2024-04-17 14:38:40.434465] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:19:31.934 [2024-04-17 14:38:40.439105] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:31.934 passed 00:19:31.934 Test: blockdev write read 8 blocks ...passed 00:19:31.934 Test: blockdev write read size > 128k ...passed 00:19:31.934 Test: blockdev write read invalid size ...passed 00:19:31.934 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:31.934 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:31.934 Test: blockdev write read max offset ...passed 00:19:31.934 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:31.934 Test: blockdev writev readv 8 blocks ...passed 00:19:31.934 Test: blockdev writev readv 30 x 1block ...passed 00:19:31.934 Test: blockdev writev readv block ...passed 00:19:31.934 Test: blockdev writev readv size > 128k ...passed 00:19:31.934 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:31.934 Test: blockdev comparev and writev ...[2024-04-17 14:38:40.452188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x20420a000 len:0x1000 00:19:31.934 [2024-04-17 14:38:40.452457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:19:31.934 passed 00:19:31.934 Test: blockdev nvme passthru rw ...passed 00:19:31.934 Test: blockdev nvme passthru vendor specific ...[2024-04-17 14:38:40.453937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:19:31.934 [2024-04-17 14:38:40.454141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:19:31.934 passed 00:19:31.934 Test: blockdev nvme admin passthru ...passed 00:19:31.934 Test: blockdev copy ...passed 00:19:31.934 Suite: bdevio tests on: Nvme2n2 00:19:31.934 Test: blockdev write read block ...passed 00:19:31.934 Test: blockdev write zeroes read block ...passed 00:19:31.934 Test: blockdev write zeroes read no split ...passed 00:19:32.217 Test: blockdev write zeroes read split ...passed 00:19:32.217 Test: blockdev write zeroes read split partial ...passed 00:19:32.217 Test: blockdev reset ...[2024-04-17 14:38:40.566468] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:19:32.217 [2024-04-17 14:38:40.571348] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:32.217 passed 00:19:32.217 Test: blockdev write read 8 blocks ...passed 00:19:32.217 Test: blockdev write read size > 128k ...passed 00:19:32.217 Test: blockdev write read invalid size ...passed 00:19:32.217 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:32.217 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:32.217 Test: blockdev write read max offset ...passed 00:19:32.217 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:32.217 Test: blockdev writev readv 8 blocks ...passed 00:19:32.217 Test: blockdev writev readv 30 x 1block ...passed 00:19:32.217 Test: blockdev writev readv block ...passed 00:19:32.217 Test: blockdev writev readv size > 128k ...passed 00:19:32.217 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:32.217 Test: blockdev comparev and writev ...[2024-04-17 14:38:40.583394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x1e3e06000 len:0x1000 00:19:32.217 [2024-04-17 14:38:40.583611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:19:32.217 passed 00:19:32.217 Test: blockdev nvme passthru rw ...passed 00:19:32.217 Test: blockdev nvme passthru vendor specific ...[2024-04-17 14:38:40.584805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:19:32.217 [2024-04-17 14:38:40.584974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:19:32.217 passed 00:19:32.217 Test: blockdev nvme admin passthru ...passed 00:19:32.217 Test: blockdev copy ...passed 00:19:32.217 Suite: bdevio tests on: Nvme2n1 00:19:32.217 Test: blockdev write read block ...passed 00:19:32.217 Test: blockdev write zeroes read block ...passed 00:19:32.217 Test: blockdev write zeroes read no split ...passed 00:19:32.217 Test: blockdev write zeroes read split ...passed 00:19:32.217 Test: blockdev write zeroes read split partial ...passed 00:19:32.217 Test: blockdev reset ...[2024-04-17 14:38:40.667829] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:19:32.217 [2024-04-17 14:38:40.672153] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:32.217 passed 00:19:32.217 Test: blockdev write read 8 blocks ...passed 00:19:32.217 Test: blockdev write read size > 128k ...passed 00:19:32.217 Test: blockdev write read invalid size ...passed 00:19:32.217 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:32.217 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:32.217 Test: blockdev write read max offset ...passed 00:19:32.217 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:32.217 Test: blockdev writev readv 8 blocks ...passed 00:19:32.217 Test: blockdev writev readv 30 x 1block ...passed 00:19:32.217 Test: blockdev writev readv block ...passed 00:19:32.217 Test: blockdev writev readv size > 128k ...passed 00:19:32.217 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:32.217 Test: blockdev comparev and writev ...[2024-04-17 14:38:40.686729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x1e3e01000 len:0x1000 00:19:32.217 [2024-04-17 14:38:40.686940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:19:32.217 passed 00:19:32.217 Test: blockdev nvme passthru rw ...passed 00:19:32.217 Test: blockdev nvme passthru vendor specific ...[2024-04-17 14:38:40.687843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:19:32.217 [2024-04-17 14:38:40.688014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:19:32.217 passed 00:19:32.217 Test: blockdev nvme admin passthru ...passed 00:19:32.217 Test: blockdev copy ...passed 00:19:32.217 Suite: bdevio tests on: Nvme1n1 00:19:32.217 Test: blockdev write read block ...passed 00:19:32.217 Test: blockdev write zeroes read block ...passed 00:19:32.217 Test: blockdev write zeroes read no split ...passed 00:19:32.217 Test: blockdev write zeroes read split ...passed 00:19:32.217 Test: blockdev write zeroes read split partial ...passed 00:19:32.217 Test: blockdev reset ...[2024-04-17 14:38:40.776107] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:19:32.217 [2024-04-17 14:38:40.780328] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:32.217 passed 00:19:32.217 Test: blockdev write read 8 blocks ...passed 00:19:32.217 Test: blockdev write read size > 128k ...passed 00:19:32.217 Test: blockdev write read invalid size ...passed 00:19:32.217 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:32.217 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:32.217 Test: blockdev write read max offset ...passed 00:19:32.217 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:32.217 Test: blockdev writev readv 8 blocks ...passed 00:19:32.217 Test: blockdev writev readv 30 x 1block ...passed 00:19:32.217 Test: blockdev writev readv block ...passed 00:19:32.217 Test: blockdev writev readv size > 128k ...passed 00:19:32.217 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:32.217 Test: blockdev comparev and writev ...[2024-04-17 14:38:40.790086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x203e06000 len:0x1000 00:19:32.217 [2024-04-17 14:38:40.790305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:19:32.217 passed 00:19:32.217 Test: blockdev nvme passthru rw ...passed 00:19:32.217 Test: blockdev nvme passthru vendor specific ...[2024-04-17 14:38:40.791591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:19:32.217 [2024-04-17 14:38:40.791774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:19:32.217 passed 00:19:32.217 Test: blockdev nvme admin passthru ...passed 00:19:32.217 Test: blockdev copy ...passed 00:19:32.217 Suite: bdevio tests on: Nvme0n1 00:19:32.217 Test: blockdev write read block ...passed 00:19:32.217 Test: blockdev write zeroes read block ...passed 00:19:32.217 Test: blockdev write zeroes read no split ...passed 00:19:32.571 Test: blockdev write zeroes read split ...passed 00:19:32.571 Test: blockdev write zeroes read split partial ...passed 00:19:32.571 Test: blockdev reset ...[2024-04-17 14:38:40.878359] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:19:32.571 [2024-04-17 14:38:40.882695] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:32.571 passed 00:19:32.571 Test: blockdev write read 8 blocks ...passed 00:19:32.571 Test: blockdev write read size > 128k ...passed 00:19:32.571 Test: blockdev write read invalid size ...passed 00:19:32.571 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:32.571 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:32.571 Test: blockdev write read max offset ...passed 00:19:32.571 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:32.571 Test: blockdev writev readv 8 blocks ...passed 00:19:32.571 Test: blockdev writev readv 30 x 1block ...passed 00:19:32.571 Test: blockdev writev readv block ...passed 00:19:32.571 Test: blockdev writev readv size > 128k ...passed 00:19:32.571 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:32.571 Test: blockdev comparev and writev ...[2024-04-17 14:38:40.891342] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:19:32.571 separate metadata which is not supported yet. 
00:19:32.571 passed 00:19:32.571 Test: blockdev nvme passthru rw ...passed 00:19:32.571 Test: blockdev nvme passthru vendor specific ...[2024-04-17 14:38:40.892459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:19:32.571 [2024-04-17 14:38:40.892660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:19:32.571 passed 00:19:32.571 Test: blockdev nvme admin passthru ...passed 00:19:32.571 Test: blockdev copy ...passed 00:19:32.571 00:19:32.571 Run Summary: Type Total Ran Passed Failed Inactive 00:19:32.571 suites 6 6 n/a 0 0 00:19:32.571 tests 138 138 138 0 0 00:19:32.571 asserts 893 893 893 0 n/a 00:19:32.571 00:19:32.571 Elapsed time = 1.991 seconds 00:19:32.571 0 00:19:32.571 14:38:40 -- bdev/blockdev.sh@295 -- # killprocess 66620 00:19:32.571 14:38:40 -- common/autotest_common.sh@936 -- # '[' -z 66620 ']' 00:19:32.571 14:38:40 -- common/autotest_common.sh@940 -- # kill -0 66620 00:19:32.571 14:38:40 -- common/autotest_common.sh@941 -- # uname 00:19:32.571 14:38:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:32.571 14:38:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66620 00:19:32.571 killing process with pid 66620 00:19:32.571 14:38:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:32.571 14:38:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:32.571 14:38:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66620' 00:19:32.571 14:38:40 -- common/autotest_common.sh@955 -- # kill 66620 00:19:32.571 14:38:40 -- common/autotest_common.sh@960 -- # wait 66620 00:19:33.946 ************************************ 00:19:33.946 END TEST bdev_bounds 00:19:33.946 ************************************ 00:19:33.946 14:38:42 -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:19:33.946 00:19:33.946 real 0m3.601s 00:19:33.946 user 0m8.903s 00:19:33.946 sys 0m0.486s 00:19:33.946 14:38:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:33.946 14:38:42 -- common/autotest_common.sh@10 -- # set +x 00:19:33.946 14:38:42 -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:19:33.946 14:38:42 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:19:33.946 14:38:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:33.946 14:38:42 -- common/autotest_common.sh@10 -- # set +x 00:19:33.946 ************************************ 00:19:33.946 START TEST bdev_nbd 00:19:33.946 ************************************ 00:19:33.946 14:38:42 -- common/autotest_common.sh@1111 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:19:33.946 14:38:42 -- bdev/blockdev.sh@300 -- # uname -s 00:19:33.946 14:38:42 -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:19:33.946 14:38:42 -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:33.946 14:38:42 -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:33.946 14:38:42 -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:19:33.946 14:38:42 -- bdev/blockdev.sh@304 -- # local bdev_all 00:19:33.946 14:38:42 -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:19:33.946 14:38:42 -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd 
]] 00:19:33.946 14:38:42 -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:33.946 14:38:42 -- bdev/blockdev.sh@311 -- # local nbd_all 00:19:33.946 14:38:42 -- bdev/blockdev.sh@312 -- # bdev_num=6 00:19:33.946 14:38:42 -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:33.946 14:38:42 -- bdev/blockdev.sh@314 -- # local nbd_list 00:19:33.946 14:38:42 -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:19:33.946 14:38:42 -- bdev/blockdev.sh@315 -- # local bdev_list 00:19:33.946 14:38:42 -- bdev/blockdev.sh@318 -- # nbd_pid=66695 00:19:33.946 14:38:42 -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:33.946 14:38:42 -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:33.946 14:38:42 -- bdev/blockdev.sh@320 -- # waitforlisten 66695 /var/tmp/spdk-nbd.sock 00:19:33.946 14:38:42 -- common/autotest_common.sh@817 -- # '[' -z 66695 ']' 00:19:33.946 14:38:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:33.946 14:38:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:33.946 14:38:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:33.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:33.947 14:38:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:33.947 14:38:42 -- common/autotest_common.sh@10 -- # set +x 00:19:33.947 [2024-04-17 14:38:42.436866] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
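The nbd phase gates on two preconditions traced above: the kernel nbd driver is loaded (the [[ -e /sys/module/nbd ]] check) and a bdev_svc target is serving RPCs on its own socket. A rough by-hand equivalent, with modprobe as an assumption (the script itself only tests the sysfs path):

    sudo modprobe nbd                        # makes /sys/module/nbd appear
    [[ -e /sys/module/nbd ]] || exit 1
    # bdev_svc is a minimal SPDK app that only hosts the bdevs from the JSON
    # config and answers RPCs on the named socket.
    sudo /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-nbd.sock -i 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &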
00:19:33.947 [2024-04-17 14:38:42.437200] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:34.227 [2024-04-17 14:38:42.611022] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.489 [2024-04-17 14:38:42.945171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.423 14:38:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:35.423 14:38:43 -- common/autotest_common.sh@850 -- # return 0 00:19:35.423 14:38:43 -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:19:35.423 14:38:43 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:35.423 14:38:43 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:19:35.423 14:38:43 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:35.423 14:38:43 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:19:35.423 14:38:43 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:35.423 14:38:43 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:19:35.423 14:38:43 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:35.423 14:38:43 -- bdev/nbd_common.sh@24 -- # local i 00:19:35.423 14:38:43 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:35.423 14:38:43 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:35.423 14:38:43 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:35.423 14:38:43 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:19:35.423 14:38:44 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:35.423 14:38:44 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:35.423 14:38:44 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:35.423 14:38:44 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:19:35.423 14:38:44 -- common/autotest_common.sh@855 -- # local i 00:19:35.423 14:38:44 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:19:35.423 14:38:44 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:19:35.423 14:38:44 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:19:35.681 14:38:44 -- common/autotest_common.sh@859 -- # break 00:19:35.681 14:38:44 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:19:35.681 14:38:44 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:19:35.681 14:38:44 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:35.681 1+0 records in 00:19:35.681 1+0 records out 00:19:35.681 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000641493 s, 6.4 MB/s 00:19:35.681 14:38:44 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:35.681 14:38:44 -- common/autotest_common.sh@872 -- # size=4096 00:19:35.681 14:38:44 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:35.681 14:38:44 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:19:35.681 14:38:44 -- common/autotest_common.sh@875 -- # return 0 00:19:35.681 14:38:44 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:35.681 14:38:44 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:35.681 14:38:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:19:35.938 14:38:44 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:19:35.938 14:38:44 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:19:35.938 14:38:44 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:19:35.938 14:38:44 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:19:35.938 14:38:44 -- common/autotest_common.sh@855 -- # local i 00:19:35.938 14:38:44 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:19:35.938 14:38:44 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:19:35.938 14:38:44 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:19:35.938 14:38:44 -- common/autotest_common.sh@859 -- # break 00:19:35.938 14:38:44 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:19:35.938 14:38:44 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:19:35.938 14:38:44 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:35.938 1+0 records in 00:19:35.938 1+0 records out 00:19:35.938 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000807015 s, 5.1 MB/s 00:19:35.938 14:38:44 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:35.938 14:38:44 -- common/autotest_common.sh@872 -- # size=4096 00:19:35.938 14:38:44 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:35.938 14:38:44 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:19:35.938 14:38:44 -- common/autotest_common.sh@875 -- # return 0 00:19:35.938 14:38:44 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:35.938 14:38:44 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:35.938 14:38:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:19:36.197 14:38:44 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:19:36.197 14:38:44 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:19:36.197 14:38:44 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:19:36.197 14:38:44 -- common/autotest_common.sh@854 -- # local nbd_name=nbd2 00:19:36.197 14:38:44 -- common/autotest_common.sh@855 -- # local i 00:19:36.197 14:38:44 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:19:36.197 14:38:44 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:19:36.197 14:38:44 -- common/autotest_common.sh@858 -- # grep -q -w nbd2 /proc/partitions 00:19:36.197 14:38:44 -- common/autotest_common.sh@859 -- # break 00:19:36.197 14:38:44 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:19:36.197 14:38:44 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:19:36.197 14:38:44 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:36.197 1+0 records in 00:19:36.197 1+0 records out 00:19:36.197 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000937343 s, 4.4 MB/s 00:19:36.197 14:38:44 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:36.197 14:38:44 -- common/autotest_common.sh@872 -- # size=4096 00:19:36.197 14:38:44 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:36.197 14:38:44 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:19:36.197 14:38:44 -- common/autotest_common.sh@875 -- # return 0 
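Every nbd_start_disk above is immediately followed by the waitfornbd helper: poll /proc/partitions until the kernel exposes the device, then prove it works with one direct-I/O read. A condensed sketch of that pattern (the 20-iteration bound and the grep/dd flags mirror the trace; the sleep interval is an assumption, and the real helper also stat-checks the copied size):

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # the device gains a /proc/partitions entry once it is attached
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # one 4 KiB O_DIRECT read proves I/O actually reaches the SPDK bdev
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    }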
00:19:36.197 14:38:44 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:36.197 14:38:44 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:36.197 14:38:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:19:36.763 14:38:45 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:19:36.763 14:38:45 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:19:36.763 14:38:45 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:19:36.763 14:38:45 -- common/autotest_common.sh@854 -- # local nbd_name=nbd3 00:19:36.763 14:38:45 -- common/autotest_common.sh@855 -- # local i 00:19:36.763 14:38:45 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:19:36.763 14:38:45 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:19:36.763 14:38:45 -- common/autotest_common.sh@858 -- # grep -q -w nbd3 /proc/partitions 00:19:36.763 14:38:45 -- common/autotest_common.sh@859 -- # break 00:19:36.763 14:38:45 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:19:36.763 14:38:45 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:19:36.763 14:38:45 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:36.763 1+0 records in 00:19:36.763 1+0 records out 00:19:36.763 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000868571 s, 4.7 MB/s 00:19:36.763 14:38:45 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:36.763 14:38:45 -- common/autotest_common.sh@872 -- # size=4096 00:19:36.763 14:38:45 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:36.763 14:38:45 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:19:36.763 14:38:45 -- common/autotest_common.sh@875 -- # return 0 00:19:36.763 14:38:45 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:36.763 14:38:45 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:36.763 14:38:45 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:19:37.022 14:38:45 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:19:37.022 14:38:45 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:19:37.022 14:38:45 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:19:37.022 14:38:45 -- common/autotest_common.sh@854 -- # local nbd_name=nbd4 00:19:37.022 14:38:45 -- common/autotest_common.sh@855 -- # local i 00:19:37.022 14:38:45 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:19:37.022 14:38:45 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:19:37.022 14:38:45 -- common/autotest_common.sh@858 -- # grep -q -w nbd4 /proc/partitions 00:19:37.022 14:38:45 -- common/autotest_common.sh@859 -- # break 00:19:37.022 14:38:45 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:19:37.022 14:38:45 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:19:37.022 14:38:45 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:37.022 1+0 records in 00:19:37.022 1+0 records out 00:19:37.022 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000784692 s, 5.2 MB/s 00:19:37.022 14:38:45 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:37.022 14:38:45 -- common/autotest_common.sh@872 -- # size=4096 00:19:37.022 14:38:45 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:37.022 14:38:45 -- common/autotest_common.sh@874 -- # '[' 
4096 '!=' 0 ']' 00:19:37.022 14:38:45 -- common/autotest_common.sh@875 -- # return 0 00:19:37.022 14:38:45 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:37.022 14:38:45 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:37.022 14:38:45 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:19:37.280 14:38:45 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:19:37.280 14:38:45 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:19:37.280 14:38:45 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:19:37.280 14:38:45 -- common/autotest_common.sh@854 -- # local nbd_name=nbd5 00:19:37.280 14:38:45 -- common/autotest_common.sh@855 -- # local i 00:19:37.280 14:38:45 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:19:37.280 14:38:45 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:19:37.280 14:38:45 -- common/autotest_common.sh@858 -- # grep -q -w nbd5 /proc/partitions 00:19:37.280 14:38:45 -- common/autotest_common.sh@859 -- # break 00:19:37.280 14:38:45 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:19:37.280 14:38:45 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:19:37.280 14:38:45 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:37.280 1+0 records in 00:19:37.280 1+0 records out 00:19:37.280 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00065151 s, 6.3 MB/s 00:19:37.280 14:38:45 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:37.280 14:38:45 -- common/autotest_common.sh@872 -- # size=4096 00:19:37.280 14:38:45 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:37.280 14:38:45 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:19:37.280 14:38:45 -- common/autotest_common.sh@875 -- # return 0 00:19:37.280 14:38:45 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:37.280 14:38:45 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:37.280 14:38:45 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:37.605 14:38:46 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:37.605 { 00:19:37.605 "nbd_device": "/dev/nbd0", 00:19:37.605 "bdev_name": "Nvme0n1" 00:19:37.605 }, 00:19:37.606 { 00:19:37.606 "nbd_device": "/dev/nbd1", 00:19:37.606 "bdev_name": "Nvme1n1" 00:19:37.606 }, 00:19:37.606 { 00:19:37.606 "nbd_device": "/dev/nbd2", 00:19:37.606 "bdev_name": "Nvme2n1" 00:19:37.606 }, 00:19:37.606 { 00:19:37.606 "nbd_device": "/dev/nbd3", 00:19:37.606 "bdev_name": "Nvme2n2" 00:19:37.606 }, 00:19:37.606 { 00:19:37.606 "nbd_device": "/dev/nbd4", 00:19:37.606 "bdev_name": "Nvme2n3" 00:19:37.606 }, 00:19:37.606 { 00:19:37.606 "nbd_device": "/dev/nbd5", 00:19:37.606 "bdev_name": "Nvme3n1" 00:19:37.606 } 00:19:37.606 ]' 00:19:37.606 14:38:46 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:37.606 14:38:46 -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:37.606 { 00:19:37.606 "nbd_device": "/dev/nbd0", 00:19:37.606 "bdev_name": "Nvme0n1" 00:19:37.606 }, 00:19:37.606 { 00:19:37.606 "nbd_device": "/dev/nbd1", 00:19:37.606 "bdev_name": "Nvme1n1" 00:19:37.606 }, 00:19:37.606 { 00:19:37.606 "nbd_device": "/dev/nbd2", 00:19:37.606 "bdev_name": "Nvme2n1" 00:19:37.606 }, 00:19:37.606 { 00:19:37.606 "nbd_device": "/dev/nbd3", 00:19:37.606 "bdev_name": "Nvme2n2" 00:19:37.606 }, 00:19:37.606 { 00:19:37.606 "nbd_device": 
"/dev/nbd4", 00:19:37.606 "bdev_name": "Nvme2n3" 00:19:37.606 }, 00:19:37.606 { 00:19:37.606 "nbd_device": "/dev/nbd5", 00:19:37.606 "bdev_name": "Nvme3n1" 00:19:37.606 } 00:19:37.606 ]' 00:19:37.606 14:38:46 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:37.606 14:38:46 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:19:37.606 14:38:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:37.606 14:38:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:19:37.606 14:38:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:37.606 14:38:46 -- bdev/nbd_common.sh@51 -- # local i 00:19:37.606 14:38:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:37.606 14:38:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:37.897 14:38:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:37.897 14:38:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:37.898 14:38:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:37.898 14:38:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:37.898 14:38:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:37.898 14:38:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:37.898 14:38:46 -- bdev/nbd_common.sh@41 -- # break 00:19:37.898 14:38:46 -- bdev/nbd_common.sh@45 -- # return 0 00:19:37.898 14:38:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:37.898 14:38:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:19:38.157 14:38:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:38.157 14:38:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:38.157 14:38:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:38.157 14:38:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:38.157 14:38:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:38.157 14:38:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:38.157 14:38:46 -- bdev/nbd_common.sh@41 -- # break 00:19:38.157 14:38:46 -- bdev/nbd_common.sh@45 -- # return 0 00:19:38.157 14:38:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:38.157 14:38:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:19:38.415 14:38:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:19:38.415 14:38:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:19:38.415 14:38:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:19:38.415 14:38:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:38.415 14:38:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:38.415 14:38:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:19:38.415 14:38:46 -- bdev/nbd_common.sh@41 -- # break 00:19:38.415 14:38:46 -- bdev/nbd_common.sh@45 -- # return 0 00:19:38.415 14:38:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:38.415 14:38:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:19:38.674 14:38:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:19:38.674 14:38:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:19:38.674 14:38:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:19:38.674 
14:38:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:38.674 14:38:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:38.674 14:38:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:19:38.674 14:38:47 -- bdev/nbd_common.sh@41 -- # break 00:19:38.674 14:38:47 -- bdev/nbd_common.sh@45 -- # return 0 00:19:38.674 14:38:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:38.674 14:38:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:19:38.932 14:38:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:19:38.932 14:38:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:19:38.932 14:38:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:19:38.933 14:38:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:38.933 14:38:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:38.933 14:38:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:19:38.933 14:38:47 -- bdev/nbd_common.sh@41 -- # break 00:19:38.933 14:38:47 -- bdev/nbd_common.sh@45 -- # return 0 00:19:38.933 14:38:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:38.933 14:38:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:19:39.212 14:38:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:19:39.212 14:38:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:19:39.212 14:38:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:19:39.212 14:38:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:39.212 14:38:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:39.212 14:38:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:19:39.212 14:38:47 -- bdev/nbd_common.sh@41 -- # break 00:19:39.212 14:38:47 -- bdev/nbd_common.sh@45 -- # return 0 00:19:39.212 14:38:47 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:39.212 14:38:47 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:39.212 14:38:47 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:39.469 14:38:47 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:39.469 14:38:47 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:39.469 14:38:47 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:39.469 14:38:47 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:39.469 14:38:47 -- bdev/nbd_common.sh@65 -- # echo '' 00:19:39.469 14:38:47 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:39.469 14:38:47 -- bdev/nbd_common.sh@65 -- # true 00:19:39.469 14:38:47 -- bdev/nbd_common.sh@65 -- # count=0 00:19:39.469 14:38:47 -- bdev/nbd_common.sh@66 -- # echo 0 00:19:39.469 14:38:47 -- bdev/nbd_common.sh@122 -- # count=0 00:19:39.469 14:38:47 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:39.469 14:38:47 -- bdev/nbd_common.sh@127 -- # return 0 00:19:39.470 14:38:47 -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:39.470 14:38:47 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:39.470 14:38:47 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:19:39.470 14:38:47 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:39.470 14:38:47 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:39.470 14:38:47 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:39.470 14:38:47 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:39.470 14:38:47 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:39.470 14:38:47 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:19:39.470 14:38:47 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:39.470 14:38:47 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:39.470 14:38:47 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:39.470 14:38:47 -- bdev/nbd_common.sh@12 -- # local i 00:19:39.470 14:38:47 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:39.470 14:38:47 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:39.470 14:38:47 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:19:39.727 /dev/nbd0 00:19:39.727 14:38:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:39.727 14:38:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:39.727 14:38:48 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:19:39.727 14:38:48 -- common/autotest_common.sh@855 -- # local i 00:19:39.727 14:38:48 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:19:39.727 14:38:48 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:19:39.727 14:38:48 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:19:39.727 14:38:48 -- common/autotest_common.sh@859 -- # break 00:19:39.727 14:38:48 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:19:39.727 14:38:48 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:19:39.727 14:38:48 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:39.727 1+0 records in 00:19:39.727 1+0 records out 00:19:39.727 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000788263 s, 5.2 MB/s 00:19:39.727 14:38:48 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:39.727 14:38:48 -- common/autotest_common.sh@872 -- # size=4096 00:19:39.727 14:38:48 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:39.727 14:38:48 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:19:39.727 14:38:48 -- common/autotest_common.sh@875 -- # return 0 00:19:39.727 14:38:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:39.727 14:38:48 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:39.727 14:38:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:19:39.985 /dev/nbd1 00:19:39.985 14:38:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:39.985 14:38:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:39.985 14:38:48 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:19:39.985 14:38:48 -- common/autotest_common.sh@855 -- # local i 00:19:39.985 14:38:48 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:19:39.985 14:38:48 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:19:39.985 14:38:48 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:19:39.985 14:38:48 -- common/autotest_common.sh@859 -- # break 
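In this data-verify pass the bdev-to-device mapping is explicit: each bdev is pinned to a chosen /dev/nbdN over the dedicated socket, bdev name first and device node second, as in the trace (only the first three calls shown; the nbd10/nbd11 numbering simply follows this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    "$rpc" -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0
    "$rpc" -s "$sock" nbd_start_disk Nvme1n1 /dev/nbd1
    "$rpc" -s "$sock" nbd_start_disk Nvme2n1 /dev/nbd10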
00:19:39.985 14:38:48 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:19:39.985 14:38:48 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:19:39.985 14:38:48 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:39.985 1+0 records in 00:19:39.985 1+0 records out 00:19:39.985 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000693107 s, 5.9 MB/s 00:19:39.985 14:38:48 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:39.985 14:38:48 -- common/autotest_common.sh@872 -- # size=4096 00:19:39.985 14:38:48 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:39.985 14:38:48 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:19:39.985 14:38:48 -- common/autotest_common.sh@875 -- # return 0 00:19:39.985 14:38:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:39.985 14:38:48 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:39.985 14:38:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:19:40.244 /dev/nbd10 00:19:40.244 14:38:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:19:40.244 14:38:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:19:40.244 14:38:48 -- common/autotest_common.sh@854 -- # local nbd_name=nbd10 00:19:40.244 14:38:48 -- common/autotest_common.sh@855 -- # local i 00:19:40.244 14:38:48 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:19:40.244 14:38:48 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:19:40.244 14:38:48 -- common/autotest_common.sh@858 -- # grep -q -w nbd10 /proc/partitions 00:19:40.244 14:38:48 -- common/autotest_common.sh@859 -- # break 00:19:40.244 14:38:48 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:19:40.244 14:38:48 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:19:40.244 14:38:48 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:40.503 1+0 records in 00:19:40.503 1+0 records out 00:19:40.503 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010871 s, 3.8 MB/s 00:19:40.503 14:38:48 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:40.503 14:38:48 -- common/autotest_common.sh@872 -- # size=4096 00:19:40.503 14:38:48 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:40.503 14:38:48 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:19:40.503 14:38:48 -- common/autotest_common.sh@875 -- # return 0 00:19:40.503 14:38:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:40.503 14:38:48 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:40.503 14:38:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:19:40.761 /dev/nbd11 00:19:40.761 14:38:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:19:40.761 14:38:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:19:40.761 14:38:49 -- common/autotest_common.sh@854 -- # local nbd_name=nbd11 00:19:40.761 14:38:49 -- common/autotest_common.sh@855 -- # local i 00:19:40.761 14:38:49 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:19:40.761 14:38:49 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:19:40.761 14:38:49 -- common/autotest_common.sh@858 -- # grep -q -w nbd11 /proc/partitions 00:19:40.761 14:38:49 -- 
common/autotest_common.sh@859 -- # break 00:19:40.761 14:38:49 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:19:40.761 14:38:49 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:19:40.761 14:38:49 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:40.761 1+0 records in 00:19:40.761 1+0 records out 00:19:40.761 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000691633 s, 5.9 MB/s 00:19:40.761 14:38:49 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:40.761 14:38:49 -- common/autotest_common.sh@872 -- # size=4096 00:19:40.761 14:38:49 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:40.761 14:38:49 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:19:40.761 14:38:49 -- common/autotest_common.sh@875 -- # return 0 00:19:40.761 14:38:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:40.761 14:38:49 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:40.761 14:38:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:19:41.019 /dev/nbd12 00:19:41.019 14:38:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:19:41.019 14:38:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:19:41.019 14:38:49 -- common/autotest_common.sh@854 -- # local nbd_name=nbd12 00:19:41.019 14:38:49 -- common/autotest_common.sh@855 -- # local i 00:19:41.019 14:38:49 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:19:41.019 14:38:49 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:19:41.019 14:38:49 -- common/autotest_common.sh@858 -- # grep -q -w nbd12 /proc/partitions 00:19:41.019 14:38:49 -- common/autotest_common.sh@859 -- # break 00:19:41.019 14:38:49 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:19:41.019 14:38:49 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:19:41.019 14:38:49 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:41.019 1+0 records in 00:19:41.019 1+0 records out 00:19:41.019 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00115016 s, 3.6 MB/s 00:19:41.019 14:38:49 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:41.019 14:38:49 -- common/autotest_common.sh@872 -- # size=4096 00:19:41.019 14:38:49 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:41.019 14:38:49 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:19:41.019 14:38:49 -- common/autotest_common.sh@875 -- # return 0 00:19:41.019 14:38:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:41.019 14:38:49 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:41.019 14:38:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:19:41.277 /dev/nbd13 00:19:41.277 14:38:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:19:41.277 14:38:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:19:41.277 14:38:49 -- common/autotest_common.sh@854 -- # local nbd_name=nbd13 00:19:41.277 14:38:49 -- common/autotest_common.sh@855 -- # local i 00:19:41.277 14:38:49 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:19:41.277 14:38:49 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:19:41.277 14:38:49 -- common/autotest_common.sh@858 -- # grep -q -w nbd13 /proc/partitions 
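Every attach above goes through the same dedicated RPC socket, and the test later lists and detaches the exports the same way. Condensed, the control flow is as follows (all commands exactly as traced, assuming an spdk-nbd target is already listening on /var/tmp/spdk-nbd.sock):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

# Export an NVMe bdev through an NBD block device.
"$rpc" -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0

# List the current exports as JSON and count the attached devices.
"$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd

# Detach when finished.
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0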
00:19:41.277 14:38:49 -- common/autotest_common.sh@859 -- # break 00:19:41.277 14:38:49 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:19:41.277 14:38:49 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:19:41.277 14:38:49 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:41.534 1+0 records in 00:19:41.534 1+0 records out 00:19:41.534 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000789847 s, 5.2 MB/s 00:19:41.534 14:38:49 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:41.534 14:38:49 -- common/autotest_common.sh@872 -- # size=4096 00:19:41.534 14:38:49 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:41.534 14:38:49 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:19:41.534 14:38:49 -- common/autotest_common.sh@875 -- # return 0 00:19:41.534 14:38:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:41.534 14:38:49 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:41.534 14:38:49 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:41.534 14:38:49 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:41.534 14:38:49 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:41.793 14:38:50 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:41.793 { 00:19:41.793 "nbd_device": "/dev/nbd0", 00:19:41.793 "bdev_name": "Nvme0n1" 00:19:41.793 }, 00:19:41.793 { 00:19:41.793 "nbd_device": "/dev/nbd1", 00:19:41.793 "bdev_name": "Nvme1n1" 00:19:41.793 }, 00:19:41.793 { 00:19:41.793 "nbd_device": "/dev/nbd10", 00:19:41.793 "bdev_name": "Nvme2n1" 00:19:41.793 }, 00:19:41.793 { 00:19:41.793 "nbd_device": "/dev/nbd11", 00:19:41.793 "bdev_name": "Nvme2n2" 00:19:41.793 }, 00:19:41.793 { 00:19:41.793 "nbd_device": "/dev/nbd12", 00:19:41.793 "bdev_name": "Nvme2n3" 00:19:41.793 }, 00:19:41.793 { 00:19:41.793 "nbd_device": "/dev/nbd13", 00:19:41.793 "bdev_name": "Nvme3n1" 00:19:41.793 } 00:19:41.793 ]' 00:19:41.793 14:38:50 -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:41.793 { 00:19:41.793 "nbd_device": "/dev/nbd0", 00:19:41.793 "bdev_name": "Nvme0n1" 00:19:41.793 }, 00:19:41.793 { 00:19:41.793 "nbd_device": "/dev/nbd1", 00:19:41.793 "bdev_name": "Nvme1n1" 00:19:41.793 }, 00:19:41.793 { 00:19:41.793 "nbd_device": "/dev/nbd10", 00:19:41.793 "bdev_name": "Nvme2n1" 00:19:41.793 }, 00:19:41.793 { 00:19:41.793 "nbd_device": "/dev/nbd11", 00:19:41.793 "bdev_name": "Nvme2n2" 00:19:41.793 }, 00:19:41.793 { 00:19:41.793 "nbd_device": "/dev/nbd12", 00:19:41.793 "bdev_name": "Nvme2n3" 00:19:41.793 }, 00:19:41.793 { 00:19:41.793 "nbd_device": "/dev/nbd13", 00:19:41.793 "bdev_name": "Nvme3n1" 00:19:41.793 } 00:19:41.793 ]' 00:19:41.793 14:38:50 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:41.793 14:38:50 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:19:41.793 /dev/nbd1 00:19:41.793 /dev/nbd10 00:19:41.793 /dev/nbd11 00:19:41.793 /dev/nbd12 00:19:41.793 /dev/nbd13' 00:19:41.793 14:38:50 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:41.793 14:38:50 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:19:41.793 /dev/nbd1 00:19:41.793 /dev/nbd10 00:19:41.793 /dev/nbd11 00:19:41.793 /dev/nbd12 00:19:41.793 /dev/nbd13' 00:19:41.793 14:38:50 -- bdev/nbd_common.sh@65 -- # count=6 00:19:41.793 14:38:50 -- bdev/nbd_common.sh@66 -- # echo 6 00:19:41.793 14:38:50 -- bdev/nbd_common.sh@95 -- # 
count=6 00:19:41.793 14:38:50 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:19:41.793 14:38:50 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:19:41.793 14:38:50 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:41.793 14:38:50 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:41.793 14:38:50 -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:41.793 14:38:50 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:41.793 14:38:50 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:41.793 14:38:50 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:41.793 256+0 records in 00:19:41.793 256+0 records out 00:19:41.793 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118964 s, 88.1 MB/s 00:19:41.793 14:38:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:41.793 14:38:50 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:41.793 256+0 records in 00:19:41.793 256+0 records out 00:19:41.793 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127616 s, 8.2 MB/s 00:19:41.793 14:38:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:41.793 14:38:50 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:19:42.052 256+0 records in 00:19:42.052 256+0 records out 00:19:42.052 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140272 s, 7.5 MB/s 00:19:42.052 14:38:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:42.052 14:38:50 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:19:42.320 256+0 records in 00:19:42.320 256+0 records out 00:19:42.320 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134756 s, 7.8 MB/s 00:19:42.320 14:38:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:42.320 14:38:50 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:19:42.320 256+0 records in 00:19:42.320 256+0 records out 00:19:42.320 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134226 s, 7.8 MB/s 00:19:42.320 14:38:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:42.320 14:38:50 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:19:42.582 256+0 records in 00:19:42.582 256+0 records out 00:19:42.582 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.130505 s, 8.0 MB/s 00:19:42.582 14:38:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:42.582 14:38:50 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:19:42.582 256+0 records in 00:19:42.582 256+0 records out 00:19:42.582 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131129 s, 8.0 MB/s 00:19:42.582 14:38:51 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:19:42.582 14:38:51 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:42.582 14:38:51 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:42.582 14:38:51 -- 
bdev/nbd_common.sh@71 -- # local operation=verify 00:19:42.582 14:38:51 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:42.582 14:38:51 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:42.582 14:38:51 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:42.582 14:38:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:42.582 14:38:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:42.582 14:38:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:42.582 14:38:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:19:42.582 14:38:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:42.582 14:38:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:19:42.582 14:38:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:42.582 14:38:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:19:42.582 14:38:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:42.582 14:38:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:19:42.582 14:38:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:42.582 14:38:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:19:42.582 14:38:51 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:42.582 14:38:51 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:42.582 14:38:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:42.582 14:38:51 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:42.582 14:38:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:42.582 14:38:51 -- bdev/nbd_common.sh@51 -- # local i 00:19:42.582 14:38:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:42.582 14:38:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:42.840 14:38:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:42.840 14:38:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:42.840 14:38:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:42.840 14:38:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:42.840 14:38:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:42.840 14:38:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:42.840 14:38:51 -- bdev/nbd_common.sh@41 -- # break 00:19:42.840 14:38:51 -- bdev/nbd_common.sh@45 -- # return 0 00:19:42.840 14:38:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:42.840 14:38:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:19:43.099 14:38:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:43.099 14:38:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:43.099 14:38:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:43.099 14:38:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:43.099 14:38:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:43.099 14:38:51 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:43.099 14:38:51 -- bdev/nbd_common.sh@41 -- # break 00:19:43.099 14:38:51 -- bdev/nbd_common.sh@45 -- # return 0 00:19:43.099 14:38:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:43.099 14:38:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:19:43.358 14:38:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:19:43.358 14:38:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:19:43.358 14:38:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:19:43.358 14:38:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:43.358 14:38:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:43.358 14:38:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:19:43.358 14:38:51 -- bdev/nbd_common.sh@41 -- # break 00:19:43.358 14:38:51 -- bdev/nbd_common.sh@45 -- # return 0 00:19:43.358 14:38:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:43.358 14:38:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:19:43.922 14:38:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:19:43.922 14:38:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:19:43.922 14:38:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:19:43.922 14:38:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:43.922 14:38:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:43.922 14:38:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:19:43.922 14:38:52 -- bdev/nbd_common.sh@41 -- # break 00:19:43.922 14:38:52 -- bdev/nbd_common.sh@45 -- # return 0 00:19:43.922 14:38:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:43.922 14:38:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:19:44.180 14:38:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:19:44.180 14:38:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:19:44.180 14:38:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:19:44.180 14:38:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:44.180 14:38:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:44.180 14:38:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:19:44.180 14:38:52 -- bdev/nbd_common.sh@41 -- # break 00:19:44.180 14:38:52 -- bdev/nbd_common.sh@45 -- # return 0 00:19:44.180 14:38:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:44.180 14:38:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:19:44.438 14:38:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:19:44.438 14:38:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:19:44.438 14:38:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:19:44.438 14:38:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:44.438 14:38:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:44.438 14:38:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:19:44.438 14:38:52 -- bdev/nbd_common.sh@41 -- # break 00:19:44.438 14:38:52 -- bdev/nbd_common.sh@45 -- # return 0 00:19:44.438 14:38:52 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:44.438 14:38:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:44.438 14:38:52 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:44.743 14:38:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:44.743 14:38:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:44.743 14:38:53 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:44.743 14:38:53 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:44.743 14:38:53 -- bdev/nbd_common.sh@65 -- # echo '' 00:19:44.743 14:38:53 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:44.743 14:38:53 -- bdev/nbd_common.sh@65 -- # true 00:19:44.743 14:38:53 -- bdev/nbd_common.sh@65 -- # count=0 00:19:44.743 14:38:53 -- bdev/nbd_common.sh@66 -- # echo 0 00:19:44.743 14:38:53 -- bdev/nbd_common.sh@104 -- # count=0 00:19:44.743 14:38:53 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:44.743 14:38:53 -- bdev/nbd_common.sh@109 -- # return 0 00:19:44.743 14:38:53 -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:44.743 14:38:53 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:44.743 14:38:53 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:44.743 14:38:53 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:19:44.743 14:38:53 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:19:44.743 14:38:53 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:45.018 malloc_lvol_verify 00:19:45.018 14:38:53 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:45.275 c88af259-e7db-41c4-8775-010ac4531a17 00:19:45.275 14:38:53 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:45.532 fabd68f5-1f96-46ae-bc3e-bbd64e81d690 00:19:45.532 14:38:53 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:45.790 /dev/nbd0 00:19:45.790 14:38:54 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:19:45.790 mke2fs 1.46.5 (30-Dec-2021) 00:19:45.790 Discarding device blocks: 0/4096 done 00:19:45.790 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:45.790 00:19:45.790 Allocating group tables: 0/1 done 00:19:45.790 Writing inode tables: 0/1 done 00:19:45.790 Creating journal (1024 blocks): done 00:19:45.790 Writing superblocks and filesystem accounting information: 0/1 done 00:19:45.790 00:19:45.790 14:38:54 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:19:45.790 14:38:54 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:45.790 14:38:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:45.790 14:38:54 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:45.790 14:38:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:45.790 14:38:54 -- bdev/nbd_common.sh@51 -- # local i 00:19:45.790 14:38:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:45.790 14:38:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:46.069 14:38:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:46.069 14:38:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:46.069 14:38:54 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:19:46.069 14:38:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:46.069 14:38:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:46.069 14:38:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:46.069 14:38:54 -- bdev/nbd_common.sh@41 -- # break 00:19:46.069 14:38:54 -- bdev/nbd_common.sh@45 -- # return 0 00:19:46.069 14:38:54 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:19:46.069 14:38:54 -- bdev/nbd_common.sh@147 -- # return 0 00:19:46.069 14:38:54 -- bdev/blockdev.sh@326 -- # killprocess 66695 00:19:46.069 14:38:54 -- common/autotest_common.sh@936 -- # '[' -z 66695 ']' 00:19:46.069 14:38:54 -- common/autotest_common.sh@940 -- # kill -0 66695 00:19:46.069 14:38:54 -- common/autotest_common.sh@941 -- # uname 00:19:46.069 14:38:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:46.069 14:38:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66695 00:19:46.069 14:38:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:46.069 14:38:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:46.069 14:38:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66695' 00:19:46.069 killing process with pid 66695 00:19:46.069 14:38:54 -- common/autotest_common.sh@955 -- # kill 66695 00:19:46.069 14:38:54 -- common/autotest_common.sh@960 -- # wait 66695 00:19:47.974 14:38:56 -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:19:47.974 00:19:47.974 real 0m13.905s 00:19:47.974 user 0m18.522s 00:19:47.974 sys 0m5.119s 00:19:47.974 14:38:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:47.974 14:38:56 -- common/autotest_common.sh@10 -- # set +x 00:19:47.974 ************************************ 00:19:47.974 END TEST bdev_nbd 00:19:47.974 ************************************ 00:19:47.974 14:38:56 -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:19:47.974 14:38:56 -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:19:47.974 14:38:56 -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:19:47.974 skipping fio tests on NVMe due to multi-ns failures. 00:19:47.974 14:38:56 -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:47.974 14:38:56 -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:47.974 14:38:56 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:19:47.974 14:38:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:47.974 14:38:56 -- common/autotest_common.sh@10 -- # set +x 00:19:47.974 ************************************ 00:19:47.974 START TEST bdev_verify 00:19:47.974 ************************************ 00:19:47.974 14:38:56 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:47.974 [2024-04-17 14:38:56.482658] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
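The bdev_nbd data check that just completed boils down to one random 1 MiB pattern pushed through every export and compared back byte-for-byte. A sketch with the same dd/cmp parameters as the trace (the pattern-file path is illustrative, and the trace runs the full write pass before the compare pass; one loop is equivalent for the sketch):

pattern=/tmp/nbdrandtest
dd if=/dev/urandom of="$pattern" bs=4096 count=256           # 1 MiB random pattern

for dev in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
  dd if="$pattern" of="$dev" bs=4096 count=256 oflag=direct  # write it out
  cmp -b -n 1M "$pattern" "$dev"                             # read it back and compare
done
rm "$pattern"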
00:19:47.974 [2024-04-17 14:38:56.483033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67113 ] 00:19:48.232 [2024-04-17 14:38:56.667222] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:48.490 [2024-04-17 14:38:57.010205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.490 [2024-04-17 14:38:57.010244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.490 [2024-04-17 14:38:57.060601] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:19:49.425 [2024-04-17 14:38:57.757435] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:19:49.425 Running I/O for 5 seconds... 00:19:54.738 00:19:54.738 Latency(us) 00:19:54.738 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.738 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:54.738 Verification LBA range: start 0x0 length 0xbd0bd 00:19:54.738 Nvme0n1 : 5.08 1448.63 5.66 0.00 0.00 87876.02 11047.50 82887.44 00:19:54.738 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:54.738 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:19:54.738 Nvme0n1 : 5.04 1471.87 5.75 0.00 0.00 86634.68 15229.32 82388.11 00:19:54.738 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:54.738 Verification LBA range: start 0x0 length 0xa0000 00:19:54.738 Nvme1n1 : 5.08 1448.06 5.66 0.00 0.00 87736.02 10797.84 79392.18 00:19:54.738 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:54.738 Verification LBA range: start 0xa0000 length 0xa0000 00:19:54.738 Nvme1n1 : 5.05 1471.46 5.75 0.00 0.00 86486.73 17725.93 79392.18 00:19:54.738 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:54.738 Verification LBA range: start 0x0 length 0x80000 00:19:54.738 Nvme2n1 : 5.10 1456.62 5.69 0.00 0.00 87327.65 10360.93 75397.61 00:19:54.738 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:54.738 Verification LBA range: start 0x80000 length 0x80000 00:19:54.738 Nvme2n1 : 5.09 1484.42 5.80 0.00 0.00 85605.24 12233.39 76396.25 00:19:54.738 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:54.738 Verification LBA range: start 0x0 length 0x80000 00:19:54.738 Nvme2n2 : 5.10 1456.10 5.69 0.00 0.00 87219.49 10673.01 74398.96 00:19:54.738 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:54.738 Verification LBA range: start 0x80000 length 0x80000 00:19:54.738 Nvme2n2 : 5.09 1483.95 5.80 0.00 0.00 85452.06 12046.14 72401.68 00:19:54.738 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:54.738 Verification LBA range: start 0x0 length 0x80000 00:19:54.738 Nvme2n3 : 5.10 1455.59 5.69 0.00 0.00 87055.97 10985.08 79392.18 00:19:54.738 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:54.738 Verification LBA range: start 0x80000 length 0x80000 00:19:54.738 Nvme2n3 : 5.09 1483.46 5.79 0.00 0.00 85303.21 11671.65 77394.90 00:19:54.738 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:54.738 Verification LBA range: start 0x0 length 0x20000 00:19:54.738 Nvme3n1 : 5.10 1455.07 5.68 0.00 0.00 86910.82 10048.85 83386.76 
00:19:54.738 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:54.738 Verification LBA range: start 0x20000 length 0x20000 00:19:54.738 Nvme3n1 : 5.09 1482.97 5.79 0.00 0.00 85181.71 11109.91 82388.11 00:19:54.738 =================================================================================================================== 00:19:54.738 Total : 17598.20 68.74 0.00 0.00 86557.27 10048.85 83386.76 00:19:56.638 00:19:56.638 real 0m8.428s 00:19:56.638 user 0m15.110s 00:19:56.638 sys 0m0.343s 00:19:56.638 14:39:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:56.638 ************************************ 00:19:56.638 END TEST bdev_verify 00:19:56.638 ************************************ 00:19:56.638 14:39:04 -- common/autotest_common.sh@10 -- # set +x 00:19:56.638 14:39:04 -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:56.638 14:39:04 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:19:56.638 14:39:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:56.638 14:39:04 -- common/autotest_common.sh@10 -- # set +x 00:19:56.638 ************************************ 00:19:56.638 START TEST bdev_verify_big_io 00:19:56.638 ************************************ 00:19:56.638 14:39:04 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:56.638 [2024-04-17 14:39:05.069525] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:19:56.638 [2024-04-17 14:39:05.070221] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67226 ] 00:19:56.896 [2024-04-17 14:39:05.260607] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:57.155 [2024-04-17 14:39:05.571056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.155 [2024-04-17 14:39:05.571068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:57.155 [2024-04-17 14:39:05.621542] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:19:58.089 [2024-04-17 14:39:06.359348] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:19:58.089 Running I/O for 5 seconds... 
00:20:04.671 00:20:04.671 Latency(us) 00:20:04.671 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.672 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:04.672 Verification LBA range: start 0x0 length 0xbd0b 00:20:04.672 Nvme0n1 : 5.65 131.61 8.23 0.00 0.00 936240.61 13544.11 1278264.08 00:20:04.672 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:04.672 Verification LBA range: start 0xbd0b length 0xbd0b 00:20:04.672 Nvme0n1 : 5.63 134.23 8.39 0.00 0.00 923889.43 16227.96 966687.21 00:20:04.672 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:04.672 Verification LBA range: start 0x0 length 0xa000 00:20:04.672 Nvme1n1 : 5.66 132.25 8.27 0.00 0.00 907128.13 26588.89 1294242.38 00:20:04.672 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:04.672 Verification LBA range: start 0xa000 length 0xa000 00:20:04.672 Nvme1n1 : 5.63 136.31 8.52 0.00 0.00 890876.26 55674.39 926741.46 00:20:04.672 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:04.672 Verification LBA range: start 0x0 length 0x8000 00:20:04.672 Nvme2n1 : 5.84 135.75 8.48 0.00 0.00 851505.04 44938.97 1310220.68 00:20:04.672 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:04.672 Verification LBA range: start 0x8000 length 0x8000 00:20:04.672 Nvme2n1 : 5.64 136.23 8.51 0.00 0.00 866215.90 100863.02 854839.10 00:20:04.672 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:04.672 Verification LBA range: start 0x0 length 0x8000 00:20:04.672 Nvme2n2 : 5.84 139.62 8.73 0.00 0.00 809123.46 63164.22 1206361.72 00:20:04.672 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:04.672 Verification LBA range: start 0x8000 length 0x8000 00:20:04.672 Nvme2n2 : 5.72 138.17 8.64 0.00 0.00 825475.81 77894.22 826877.07 00:20:04.672 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:04.672 Verification LBA range: start 0x0 length 0x8000 00:20:04.672 Nvme2n3 : 5.87 149.40 9.34 0.00 0.00 742347.64 3776.12 1366144.73 00:20:04.672 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:04.672 Verification LBA range: start 0x8000 length 0x8000 00:20:04.672 Nvme2n3 : 5.84 149.92 9.37 0.00 0.00 743309.63 31082.79 906768.58 00:20:04.672 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:04.672 Verification LBA range: start 0x0 length 0x2000 00:20:04.672 Nvme3n1 : 5.89 160.92 10.06 0.00 0.00 671399.37 10673.01 1086524.46 00:20:04.672 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:04.672 Verification LBA range: start 0x2000 length 0x2000 00:20:04.672 Nvme3n1 : 5.85 164.21 10.26 0.00 0.00 665148.25 2512.21 986660.08 00:20:04.672 =================================================================================================================== 00:20:04.672 Total : 1708.62 106.79 0.00 0.00 811461.95 2512.21 1366144.73 00:20:06.572 ************************************ 00:20:06.572 END TEST bdev_verify_big_io 00:20:06.572 ************************************ 00:20:06.572 00:20:06.572 real 0m9.952s 00:20:06.572 user 0m17.837s 00:20:06.572 sys 0m0.386s 00:20:06.572 14:39:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:06.572 14:39:14 -- common/autotest_common.sh@10 -- # set +x 00:20:06.572 14:39:14 -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:06.572 14:39:14 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:20:06.572 14:39:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:06.572 14:39:14 -- common/autotest_common.sh@10 -- # set +x 00:20:06.572 ************************************ 00:20:06.572 START TEST bdev_write_zeroes 00:20:06.572 ************************************ 00:20:06.572 14:39:15 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:06.572 [2024-04-17 14:39:15.154568] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:20:06.572 [2024-04-17 14:39:15.154994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67357 ] 00:20:06.830 [2024-04-17 14:39:15.338252] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.088 [2024-04-17 14:39:15.601295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.088 [2024-04-17 14:39:15.651341] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:20:08.036 [2024-04-17 14:39:16.326812] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:20:08.036 Running I/O for 1 seconds... 00:20:08.970 00:20:08.970 Latency(us) 00:20:08.970 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:08.970 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:08.970 Nvme0n1 : 1.02 7764.72 30.33 0.00 0.00 16437.82 6397.56 79392.18 00:20:08.970 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:08.970 Nvme1n1 : 1.02 7829.04 30.58 0.00 0.00 16276.25 11858.90 63413.88 00:20:08.970 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:08.970 Nvme2n1 : 1.02 7815.08 30.53 0.00 0.00 16220.83 12108.56 63913.20 00:20:08.970 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:08.970 Nvme2n2 : 1.03 7803.29 30.48 0.00 0.00 16179.03 10360.93 65411.17 00:20:08.970 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:08.970 Nvme2n3 : 1.03 7791.30 30.43 0.00 0.00 16173.41 10298.51 65411.17 00:20:08.970 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:08.970 Nvme3n1 : 1.03 7779.53 30.39 0.00 0.00 16164.33 9799.19 65910.49 00:20:08.970 =================================================================================================================== 00:20:08.970 Total : 46782.96 182.75 0.00 0.00 16241.61 6397.56 79392.18 00:20:10.486 00:20:10.486 real 0m3.855s 00:20:10.486 user 0m3.427s 00:20:10.486 sys 0m0.299s 00:20:10.486 14:39:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:10.486 ************************************ 00:20:10.486 END TEST bdev_write_zeroes 00:20:10.486 ************************************ 00:20:10.486 14:39:18 -- common/autotest_common.sh@10 -- # set +x 00:20:10.486 14:39:18 -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json 
/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:10.486 14:39:18 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:20:10.486 14:39:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:10.486 14:39:18 -- common/autotest_common.sh@10 -- # set +x 00:20:10.486 ************************************ 00:20:10.486 START TEST bdev_json_nonenclosed 00:20:10.486 ************************************ 00:20:10.486 14:39:19 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:10.825 [2024-04-17 14:39:19.159430] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:20:10.825 [2024-04-17 14:39:19.159816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67425 ] 00:20:10.825 [2024-04-17 14:39:19.344864] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.107 [2024-04-17 14:39:19.669448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.107 [2024-04-17 14:39:19.669788] json_config.c: 582:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:20:11.107 [2024-04-17 14:39:19.669969] rpc.c: 193:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:11.107 [2024-04-17 14:39:19.670033] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:11.720 ************************************ 00:20:11.720 END TEST bdev_json_nonenclosed 00:20:11.720 ************************************ 00:20:11.720 00:20:11.720 real 0m1.142s 00:20:11.720 user 0m0.841s 00:20:11.720 sys 0m0.190s 00:20:11.720 14:39:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:11.720 14:39:20 -- common/autotest_common.sh@10 -- # set +x 00:20:11.720 14:39:20 -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:11.720 14:39:20 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:20:11.720 14:39:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:11.720 14:39:20 -- common/autotest_common.sh@10 -- # set +x 00:20:11.978 ************************************ 00:20:11.978 START TEST bdev_json_nonarray 00:20:11.978 ************************************ 00:20:11.978 14:39:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:11.978 [2024-04-17 14:39:20.391719] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:20:11.978 [2024-04-17 14:39:20.392035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67461 ] 00:20:11.978 [2024-04-17 14:39:20.557988] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.238 [2024-04-17 14:39:20.837949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.238 [2024-04-17 14:39:20.838334] json_config.c: 588:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:20:12.238 [2024-04-17 14:39:20.838559] rpc.c: 193:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:12.238 [2024-04-17 14:39:20.838653] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:12.807 ************************************ 00:20:12.807 END TEST bdev_json_nonarray 00:20:12.807 ************************************ 00:20:12.807 00:20:12.807 real 0m1.045s 00:20:12.807 user 0m0.758s 00:20:12.807 sys 0m0.177s 00:20:12.807 14:39:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:12.807 14:39:21 -- common/autotest_common.sh@10 -- # set +x 00:20:12.807 14:39:21 -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:20:12.807 14:39:21 -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:20:12.807 14:39:21 -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:20:12.807 14:39:21 -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:20:12.807 14:39:21 -- bdev/blockdev.sh@811 -- # cleanup 00:20:12.807 14:39:21 -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:20:12.807 14:39:21 -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:12.807 14:39:21 -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:20:12.807 14:39:21 -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:20:12.807 14:39:21 -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:20:12.807 14:39:21 -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:20:12.807 00:20:12.807 real 0m51.132s 00:20:12.807 user 1m13.408s 00:20:12.807 sys 0m8.640s 00:20:12.807 14:39:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:12.807 ************************************ 00:20:12.807 END TEST blockdev_nvme 00:20:12.807 ************************************ 00:20:12.807 14:39:21 -- common/autotest_common.sh@10 -- # set +x 00:20:13.065 14:39:21 -- spdk/autotest.sh@208 -- # uname -s 00:20:13.065 14:39:21 -- spdk/autotest.sh@208 -- # [[ Linux == Linux ]] 00:20:13.065 14:39:21 -- spdk/autotest.sh@209 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:20:13.065 14:39:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:13.065 14:39:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:13.065 14:39:21 -- common/autotest_common.sh@10 -- # set +x 00:20:13.065 ************************************ 00:20:13.065 START TEST blockdev_nvme_gpt 00:20:13.065 ************************************ 00:20:13.065 14:39:21 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:20:13.065 * Looking for test storage... 
00:20:13.065 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:20:13.065 14:39:21 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:20:13.065 14:39:21 -- bdev/nbd_common.sh@6 -- # set -e 00:20:13.065 14:39:21 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:20:13.065 14:39:21 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:13.065 14:39:21 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:20:13.065 14:39:21 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:20:13.065 14:39:21 -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:20:13.066 14:39:21 -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:20:13.066 14:39:21 -- bdev/blockdev.sh@20 -- # : 00:20:13.066 14:39:21 -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:20:13.066 14:39:21 -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:20:13.066 14:39:21 -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:20:13.066 14:39:21 -- bdev/blockdev.sh@674 -- # uname -s 00:20:13.066 14:39:21 -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:20:13.066 14:39:21 -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:20:13.066 14:39:21 -- bdev/blockdev.sh@682 -- # test_type=gpt 00:20:13.066 14:39:21 -- bdev/blockdev.sh@683 -- # crypto_device= 00:20:13.066 14:39:21 -- bdev/blockdev.sh@684 -- # dek= 00:20:13.066 14:39:21 -- bdev/blockdev.sh@685 -- # env_ctx= 00:20:13.066 14:39:21 -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:20:13.066 14:39:21 -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:20:13.066 14:39:21 -- bdev/blockdev.sh@690 -- # [[ gpt == bdev ]] 00:20:13.066 14:39:21 -- bdev/blockdev.sh@690 -- # [[ gpt == crypto_* ]] 00:20:13.066 14:39:21 -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:20:13.066 14:39:21 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=67542 00:20:13.066 14:39:21 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:13.066 14:39:21 -- bdev/blockdev.sh@49 -- # waitforlisten 67542 00:20:13.066 14:39:21 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:20:13.066 14:39:21 -- common/autotest_common.sh@817 -- # '[' -z 67542 ']' 00:20:13.066 14:39:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.066 14:39:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:13.066 14:39:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.066 14:39:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:13.066 14:39:21 -- common/autotest_common.sh@10 -- # set +x 00:20:13.325 [2024-04-17 14:39:21.762136] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
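start_spdk_tgt above follows the usual SPDK launch shape: run the target in the background, arm a cleanup trap, then block until the RPC socket answers. A minimal sketch — the traced waitforlisten helper's internals are not shown in this excerpt, so rpc_get_methods is assumed here as the readiness probe, and the repo's killprocess helper is replaced by a plain kill:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' &
spdk_tgt_pid=$!
trap 'kill "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT

# Wait for the target to listen on the default RPC socket.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
  sleep 0.1
done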
00:20:13.325 [2024-04-17 14:39:21.762443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67542 ] 00:20:13.584 [2024-04-17 14:39:21.934765] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.842 [2024-04-17 14:39:22.260751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.775 14:39:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:14.775 14:39:23 -- common/autotest_common.sh@850 -- # return 0 00:20:14.775 14:39:23 -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:20:14.775 14:39:23 -- bdev/blockdev.sh@702 -- # setup_gpt_conf 00:20:14.775 14:39:23 -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:15.342 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:15.342 Waiting for block devices as requested 00:20:15.600 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:15.600 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:15.857 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:20:15.857 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:20:21.124 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:20:21.124 14:39:29 -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:20:21.124 14:39:29 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:20:21.124 14:39:29 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:20:21.124 14:39:29 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:20:21.124 14:39:29 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:20:21.124 14:39:29 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:20:21.124 14:39:29 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:20:21.124 14:39:29 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:21.124 14:39:29 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:21.124 14:39:29 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:20:21.124 14:39:29 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:20:21.124 14:39:29 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:20:21.124 14:39:29 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:21.124 14:39:29 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:21.124 14:39:29 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:20:21.124 14:39:29 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:20:21.124 14:39:29 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:20:21.124 14:39:29 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:20:21.124 14:39:29 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:21.124 14:39:29 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:20:21.124 14:39:29 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:20:21.124 14:39:29 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:20:21.124 14:39:29 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:20:21.124 14:39:29 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:21.124 14:39:29 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 
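The device scan above (which continues just below for the remaining namespaces) filters out zoned namespaces before GPT setup: a block device counts as zoned when its sysfs zoned attribute reports anything other than none. The traced helper, condensed — the behavior when the attribute file is missing is an assumption, since the trace only shows the existence test passing:

is_block_zoned() {
  local device=$1
  # Assumption: treat a missing zoned attribute as "not zoned".
  [ -e /sys/block/"$device"/queue/zoned ] || return 1
  [ "$(cat /sys/block/"$device"/queue/zoned)" != none ]
}

# Example: only non-zoned namespaces are candidates for the GPT scratch disk.
is_block_zoned nvme0n1 || echo "nvme0n1 is usable"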
00:20:21.124 14:39:29 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:20:21.124 14:39:29 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:20:21.124 14:39:29 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:20:21.124 14:39:29 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:21.124 14:39:29 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:20:21.124 14:39:29 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:20:21.124 14:39:29 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:20:21.124 14:39:29 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:20:21.124 14:39:29 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:21.124 14:39:29 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:20:21.124 14:39:29 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:20:21.124 14:39:29 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:20:21.124 14:39:29 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:20:21.124 14:39:29 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:21.124 14:39:29 -- bdev/blockdev.sh@107 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:10.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:11.0/nvme/nvme0/nvme0n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n2' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n3' '/sys/bus/pci/drivers/nvme/0000:00:13.0/nvme/nvme3/nvme3c3n1') 00:20:21.124 14:39:29 -- bdev/blockdev.sh@107 -- # local nvme_devs nvme_dev 00:20:21.124 14:39:29 -- bdev/blockdev.sh@108 -- # gpt_nvme= 00:20:21.124 14:39:29 -- bdev/blockdev.sh@110 -- # for nvme_dev in "${nvme_devs[@]}" 00:20:21.124 14:39:29 -- bdev/blockdev.sh@111 -- # [[ -z '' ]] 00:20:21.124 14:39:29 -- bdev/blockdev.sh@112 -- # dev=/dev/nvme1n1 00:20:21.124 14:39:29 -- bdev/blockdev.sh@113 -- # parted /dev/nvme1n1 -ms print 00:20:21.124 14:39:29 -- bdev/blockdev.sh@113 -- # pt='Error: /dev/nvme1n1: unrecognised disk label 00:20:21.124 BYT; 00:20:21.124 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:20:21.124 14:39:29 -- bdev/blockdev.sh@114 -- # [[ Error: /dev/nvme1n1: unrecognised disk label 00:20:21.124 BYT; 00:20:21.124 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\1\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:20:21.124 14:39:29 -- bdev/blockdev.sh@115 -- # gpt_nvme=/dev/nvme1n1 00:20:21.124 14:39:29 -- bdev/blockdev.sh@116 -- # break 00:20:21.124 14:39:29 -- bdev/blockdev.sh@119 -- # [[ -n /dev/nvme1n1 ]] 00:20:21.124 14:39:29 -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:20:21.124 14:39:29 -- bdev/blockdev.sh@125 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:20:21.124 14:39:29 -- bdev/blockdev.sh@128 -- # parted -s /dev/nvme1n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:20:21.124 14:39:29 -- bdev/blockdev.sh@130 -- # get_spdk_gpt_old 00:20:21.124 14:39:29 -- scripts/common.sh@408 -- # local spdk_guid 00:20:21.124 14:39:29 -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:20:21.124 14:39:29 -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:20:21.124 14:39:29 -- scripts/common.sh@413 -- # 
IFS='()' 00:20:21.124 14:39:29 -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:20:21.125 14:39:29 -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:20:21.382 14:39:29 -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:20:21.382 14:39:29 -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:20:21.382 14:39:29 -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:20:21.382 14:39:29 -- bdev/blockdev.sh@130 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:20:21.382 14:39:29 -- bdev/blockdev.sh@131 -- # get_spdk_gpt 00:20:21.382 14:39:29 -- scripts/common.sh@420 -- # local spdk_guid 00:20:21.382 14:39:29 -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:20:21.382 14:39:29 -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:20:21.382 14:39:29 -- scripts/common.sh@425 -- # IFS='()' 00:20:21.382 14:39:29 -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:20:21.382 14:39:29 -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:20:21.382 14:39:29 -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:20:21.382 14:39:29 -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:20:21.382 14:39:29 -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:20:21.382 14:39:29 -- bdev/blockdev.sh@131 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:20:21.382 14:39:29 -- bdev/blockdev.sh@132 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme1n1 00:20:22.338 The operation has completed successfully. 00:20:22.338 14:39:30 -- bdev/blockdev.sh@133 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme1n1 00:20:23.711 The operation has completed successfully. 
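That completes the GPT scratch setup: parted writes a fresh label and two half-disk partitions, then sgdisk stamps each with an SPDK partition-type GUID (the current GUID on the first partition, the legacy one on the second) plus a fixed unique GUID — the same GUIDs the trace reads out of module/bdev/gpt/gpt.h, so the gpt bdev module will claim them on examine. The three traced commands, collected:

dev=/dev/nvme1n1

# Fresh GPT label with two equal partitions.
parted -s "$dev" mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%

# Retype each partition with SPDK's type GUID and a fixed unique GUID.
sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$dev"
sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$dev"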
00:20:23.711 14:39:31 -- bdev/blockdev.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:23.968 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:24.534 lsblk: /dev/nvme3c3n1: not a block device 00:20:24.792 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:20:24.792 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:24.792 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:24.792 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:20:24.792 14:39:33 -- bdev/blockdev.sh@135 -- # rpc_cmd bdev_get_bdevs 00:20:24.792 14:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.792 14:39:33 -- common/autotest_common.sh@10 -- # set +x 00:20:24.792 [] 00:20:24.792 14:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.792 14:39:33 -- bdev/blockdev.sh@136 -- # setup_nvme_conf 00:20:24.792 14:39:33 -- bdev/blockdev.sh@81 -- # local json 00:20:24.792 14:39:33 -- bdev/blockdev.sh@82 -- # mapfile -t json 00:20:24.792 14:39:33 -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:25.082 14:39:33 -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:20:25.082 14:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.082 14:39:33 -- common/autotest_common.sh@10 -- # set +x 00:20:25.341 14:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.341 14:39:33 -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:20:25.341 14:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.341 14:39:33 -- common/autotest_common.sh@10 -- # set +x 00:20:25.341 14:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.341 14:39:33 -- bdev/blockdev.sh@740 -- # cat 00:20:25.341 14:39:33 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:20:25.341 14:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.341 14:39:33 -- common/autotest_common.sh@10 -- # set +x 00:20:25.341 14:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.341 14:39:33 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:20:25.341 14:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.341 14:39:33 -- common/autotest_common.sh@10 -- # set +x 00:20:25.341 14:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.341 14:39:33 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:20:25.341 14:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.341 14:39:33 -- common/autotest_common.sh@10 -- # set +x 00:20:25.341 14:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.341 14:39:33 -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:20:25.341 14:39:33 -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:20:25.341 14:39:33 -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:20:25.341 14:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.341 14:39:33 -- 
common/autotest_common.sh@10 -- # set +x 00:20:25.341 14:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.341 14:39:33 -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:20:25.341 14:39:33 -- bdev/blockdev.sh@749 -- # jq -r .name 00:20:25.342 14:39:33 -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "ae1d3808-bf13-4c57-a357-333af1ad592e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "ae1d3808-bf13-4c57-a357-333af1ad592e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' 
' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "a998bed3-8701-48a4-a882-f46bf8892941"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a998bed3-8701-48a4-a882-f46bf8892941",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "7b5a51de-6f26-493f-a5f0-193c3b3c0c80"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7b5a51de-6f26-493f-a5f0-193c3b3c0c80",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "80b043a1-5726-44f1-919d-3a0b31b3bffc"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "80b043a1-5726-44f1-919d-3a0b31b3bffc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe 
Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "031f167f-d4f5-4329-add1-a6b7c61461fa"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "031f167f-d4f5-4329-add1-a6b7c61461fa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:20:25.342 14:39:33 -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:20:25.342 14:39:33 -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1p1 00:20:25.342 14:39:33 -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:20:25.342 14:39:33 -- bdev/blockdev.sh@754 -- # killprocess 67542 00:20:25.342 14:39:33 -- common/autotest_common.sh@936 -- # '[' -z 67542 ']' 00:20:25.342 14:39:33 -- common/autotest_common.sh@940 -- # kill -0 67542 00:20:25.342 14:39:33 -- common/autotest_common.sh@941 -- # uname 00:20:25.342 14:39:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:25.342 14:39:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67542 00:20:25.342 14:39:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:25.342 14:39:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:25.342 14:39:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67542' 00:20:25.342 killing process with pid 67542 00:20:25.342 14:39:33 -- common/autotest_common.sh@955 -- # kill 67542 00:20:25.342 14:39:33 -- common/autotest_common.sh@960 -- # wait 67542 00:20:28.632 14:39:36 -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:28.632 14:39:36 -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:20:28.632 14:39:36 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:20:28.632 14:39:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:28.632 14:39:36 -- common/autotest_common.sh@10 -- # set +x 00:20:28.632 ************************************ 00:20:28.632 START TEST bdev_hello_world 00:20:28.632 ************************************ 00:20:28.632 14:39:36 -- 
common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:20:28.632 [2024-04-17 14:39:37.022786] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:20:28.632 [2024-04-17 14:39:37.023309] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68225 ] 00:20:28.632 [2024-04-17 14:39:37.210855] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.890 [2024-04-17 14:39:37.480677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.826 [2024-04-17 14:39:38.242522] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:20:29.826 [2024-04-17 14:39:38.242798] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:20:29.826 [2024-04-17 14:39:38.242881] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:20:29.826 [2024-04-17 14:39:38.246728] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:20:29.826 [2024-04-17 14:39:38.247308] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:20:29.826 [2024-04-17 14:39:38.247453] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:20:29.826 [2024-04-17 14:39:38.247743] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:20:29.826 00:20:29.826 [2024-04-17 14:39:38.247881] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:20:31.207 00:20:31.207 real 0m2.848s 00:20:31.207 user 0m2.433s 00:20:31.207 sys 0m0.298s 00:20:31.207 ************************************ 00:20:31.207 END TEST bdev_hello_world 00:20:31.207 ************************************ 00:20:31.207 14:39:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:31.207 14:39:39 -- common/autotest_common.sh@10 -- # set +x 00:20:31.207 14:39:39 -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:20:31.466 14:39:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:31.466 14:39:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:31.466 14:39:39 -- common/autotest_common.sh@10 -- # set +x 00:20:31.466 ************************************ 00:20:31.466 START TEST bdev_bounds 00:20:31.466 ************************************ 00:20:31.466 14:39:39 -- common/autotest_common.sh@1111 -- # bdev_bounds '' 00:20:31.466 14:39:39 -- bdev/blockdev.sh@290 -- # bdevio_pid=68282 00:20:31.466 14:39:39 -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:31.466 Process bdevio pid: 68282 00:20:31.466 14:39:39 -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:20:31.466 14:39:39 -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 68282' 00:20:31.466 14:39:39 -- bdev/blockdev.sh@293 -- # waitforlisten 68282 00:20:31.466 14:39:39 -- common/autotest_common.sh@817 -- # '[' -z 68282 ']' 00:20:31.466 14:39:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:31.466 14:39:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:31.466 14:39:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
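
The bdevio launch traced here follows the harness's usual app-over-RPC pattern: start the SPDK app paused, wait for its RPC socket, drive the tests externally, then kill it. Stripped of the xtrace noise, the cycle is roughly the following sketch (waitforlisten and killprocess are the autotest_common.sh helpers seen in the trace; the default socket /var/tmp/spdk.sock is assumed):

  # -w makes bdevio hold the CUnit suites until it is driven over RPC.
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  bdevio_pid=$!
  waitforlisten "$bdevio_pid"    # poll until the app answers on /var/tmp/spdk.sock
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
  killprocess "$bdevio_pid"      # kill + wait, as traced at blockdev.sh@295
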
00:20:31.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:31.466 14:39:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:31.466 14:39:39 -- common/autotest_common.sh@10 -- # set +x 00:20:31.466 [2024-04-17 14:39:39.976949] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:20:31.466 [2024-04-17 14:39:39.977296] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68282 ] 00:20:31.725 [2024-04-17 14:39:40.149680] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:31.984 [2024-04-17 14:39:40.428784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:31.984 [2024-04-17 14:39:40.428811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.984 [2024-04-17 14:39:40.428814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:32.919 14:39:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:32.919 14:39:41 -- common/autotest_common.sh@850 -- # return 0 00:20:32.919 14:39:41 -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:20:32.919 I/O targets: 00:20:32.919 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:20:32.919 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:20:32.919 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:20:32.919 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:20:32.919 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:20:32.919 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:20:32.919 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:20:32.919 00:20:32.919 00:20:32.919 CUnit - A unit testing framework for C - Version 2.1-3 00:20:32.919 http://cunit.sourceforge.net/ 00:20:32.919 00:20:32.919 00:20:32.919 Suite: bdevio tests on: Nvme3n1 00:20:32.919 Test: blockdev write read block ...passed 00:20:32.919 Test: blockdev write zeroes read block ...passed 00:20:32.919 Test: blockdev write zeroes read no split ...passed 00:20:32.919 Test: blockdev write zeroes read split ...passed 00:20:32.919 Test: blockdev write zeroes read split partial ...passed 00:20:32.919 Test: blockdev reset ...[2024-04-17 14:39:41.462384] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:20:32.919 [2024-04-17 14:39:41.466788] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:32.919 passed 00:20:32.919 Test: blockdev write read 8 blocks ...passed 00:20:32.919 Test: blockdev write read size > 128k ...passed 00:20:32.919 Test: blockdev write read invalid size ...passed 00:20:32.919 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:32.919 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:32.919 Test: blockdev write read max offset ...passed 00:20:32.919 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:32.919 Test: blockdev writev readv 8 blocks ...passed 00:20:32.919 Test: blockdev writev readv 30 x 1block ...passed 00:20:32.919 Test: blockdev writev readv block ...passed 00:20:32.919 Test: blockdev writev readv size > 128k ...passed 00:20:32.919 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:32.919 Test: blockdev comparev and writev ...[2024-04-17 14:39:41.477152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x1e480a000 len:0x1000 00:20:32.919 [2024-04-17 14:39:41.477387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:20:32.919 passed 00:20:32.919 Test: blockdev nvme passthru rw ...passed 00:20:32.919 Test: blockdev nvme passthru vendor specific ...[2024-04-17 14:39:41.478568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:20:32.919 [2024-04-17 14:39:41.478759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:20:32.919 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:20:32.919 passed 00:20:32.919 Test: blockdev copy ...passed 00:20:32.919 Suite: bdevio tests on: Nvme2n3 00:20:32.919 Test: blockdev write read block ...passed 00:20:32.919 Test: blockdev write zeroes read block ...passed 00:20:32.919 Test: blockdev write zeroes read no split ...passed 00:20:33.178 Test: blockdev write zeroes read split ...passed 00:20:33.178 Test: blockdev write zeroes read split partial ...passed 00:20:33.178 Test: blockdev reset ...[2024-04-17 14:39:41.566335] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:20:33.178 [2024-04-17 14:39:41.570786] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:33.178 passed 00:20:33.178 Test: blockdev write read 8 blocks ...passed 00:20:33.178 Test: blockdev write read size > 128k ...passed 00:20:33.178 Test: blockdev write read invalid size ...passed 00:20:33.178 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:33.178 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:33.178 Test: blockdev write read max offset ...passed 00:20:33.178 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:33.178 Test: blockdev writev readv 8 blocks ...passed 00:20:33.178 Test: blockdev writev readv 30 x 1block ...passed 00:20:33.178 Test: blockdev writev readv block ...passed 00:20:33.178 Test: blockdev writev readv size > 128k ...passed 00:20:33.178 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:33.178 Test: blockdev comparev and writev ...[2024-04-17 14:39:41.579914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x1dcf04000 len:0x1000 00:20:33.178 [2024-04-17 14:39:41.580125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:20:33.178 passed 00:20:33.178 Test: blockdev nvme passthru rw ...passed 00:20:33.178 Test: blockdev nvme passthru vendor specific ...[2024-04-17 14:39:41.581198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:20:33.178 [2024-04-17 14:39:41.581379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:20:33.178 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:20:33.178 passed 00:20:33.178 Test: blockdev copy ...passed 00:20:33.178 Suite: bdevio tests on: Nvme2n2 00:20:33.178 Test: blockdev write read block ...passed 00:20:33.178 Test: blockdev write zeroes read block ...passed 00:20:33.178 Test: blockdev write zeroes read no split ...passed 00:20:33.178 Test: blockdev write zeroes read split ...passed 00:20:33.178 Test: blockdev write zeroes read split partial ...passed 00:20:33.178 Test: blockdev reset ...[2024-04-17 14:39:41.667917] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:20:33.178 [2024-04-17 14:39:41.672278] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:33.178 passed 00:20:33.178 Test: blockdev write read 8 blocks ...passed 00:20:33.178 Test: blockdev write read size > 128k ...passed 00:20:33.178 Test: blockdev write read invalid size ...passed 00:20:33.178 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:33.178 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:33.178 Test: blockdev write read max offset ...passed 00:20:33.178 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:33.178 Test: blockdev writev readv 8 blocks ...passed 00:20:33.178 Test: blockdev writev readv 30 x 1block ...passed 00:20:33.178 Test: blockdev writev readv block ...passed 00:20:33.178 Test: blockdev writev readv size > 128k ...passed 00:20:33.178 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:33.178 Test: blockdev comparev and writev ...[2024-04-17 14:39:41.682080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x1dcf04000 len:0x1000 00:20:33.178 [2024-04-17 14:39:41.682283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:20:33.178 passed 00:20:33.178 Test: blockdev nvme passthru rw ...passed 00:20:33.178 Test: blockdev nvme passthru vendor specific ...[2024-04-17 14:39:41.683426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:20:33.178 [2024-04-17 14:39:41.683601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed sqhd:001c p:1 m:0 dnr:1 00:20:33.178 00:20:33.178 Test: blockdev nvme admin passthru ...passed 00:20:33.178 Test: blockdev copy ...passed 00:20:33.178 Suite: bdevio tests on: Nvme2n1 00:20:33.178 Test: blockdev write read block ...passed 00:20:33.178 Test: blockdev write zeroes read block ...passed 00:20:33.178 Test: blockdev write zeroes read no split ...passed 00:20:33.178 Test: blockdev write zeroes read split ...passed 00:20:33.178 Test: blockdev write zeroes read split partial ...passed 00:20:33.179 Test: blockdev reset ...[2024-04-17 14:39:41.768878] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:20:33.179 [2024-04-17 14:39:41.774932] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:33.179 passed 00:20:33.179 Test: blockdev write read 8 blocks ...passed 00:20:33.179 Test: blockdev write read size > 128k ...passed 00:20:33.179 Test: blockdev write read invalid size ...passed 00:20:33.179 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:33.179 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:33.179 Test: blockdev write read max offset ...passed 00:20:33.179 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:33.179 Test: blockdev writev readv 8 blocks ...passed 00:20:33.437 Test: blockdev writev readv 30 x 1block ...passed 00:20:33.437 Test: blockdev writev readv block ...passed 00:20:33.437 Test: blockdev writev readv size > 128k ...passed 00:20:33.437 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:33.437 Test: blockdev comparev and writev ...[2024-04-17 14:39:41.786076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x1d7c3c000 len:0x1000 00:20:33.438 [2024-04-17 14:39:41.786289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:20:33.438 passed 00:20:33.438 Test: blockdev nvme passthru rw ...passed 00:20:33.438 Test: blockdev nvme passthru vendor specific ...[2024-04-17 14:39:41.787565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:20:33.438 [2024-04-17 14:39:41.787679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:20:33.438 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:20:33.438 passed 00:20:33.438 Test: blockdev copy ...passed 00:20:33.438 Suite: bdevio tests on: Nvme1n1 00:20:33.438 Test: blockdev write read block ...passed 00:20:33.438 Test: blockdev write zeroes read block ...passed 00:20:33.438 Test: blockdev write zeroes read no split ...passed 00:20:33.438 Test: blockdev write zeroes read split ...passed 00:20:33.438 Test: blockdev write zeroes read split partial ...passed 00:20:33.438 Test: blockdev reset ...[2024-04-17 14:39:41.888275] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:20:33.438 [2024-04-17 14:39:41.892885] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:33.438 passed 00:20:33.438 Test: blockdev write read 8 blocks ...passed 00:20:33.438 Test: blockdev write read size > 128k ...passed 00:20:33.438 Test: blockdev write read invalid size ...passed 00:20:33.438 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:33.438 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:33.438 Test: blockdev write read max offset ...passed 00:20:33.438 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:33.438 Test: blockdev writev readv 8 blocks ...passed 00:20:33.438 Test: blockdev writev readv 30 x 1block ...passed 00:20:33.438 Test: blockdev writev readv block ...passed 00:20:33.438 Test: blockdev writev readv size > 128k ...passed 00:20:33.438 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:33.438 Test: blockdev comparev and writev ...[2024-04-17 14:39:41.906086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x1d7c38000 len:0x1000 00:20:33.438 [2024-04-17 14:39:41.906296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:20:33.438 passed 00:20:33.438 Test: blockdev nvme passthru rw ...passed 00:20:33.438 Test: blockdev nvme passthru vendor specific ...[2024-04-17 14:39:41.907447] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:20:33.438 [2024-04-17 14:39:41.907616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed sqhd:001c p:1 m:0 dnr:1 00:20:33.438 00:20:33.438 Test: blockdev nvme admin passthru ...passed 00:20:33.438 Test: blockdev copy ...passed 00:20:33.438 Suite: bdevio tests on: Nvme0n1p2 00:20:33.438 Test: blockdev write read block ...passed 00:20:33.438 Test: blockdev write zeroes read block ...passed 00:20:33.438 Test: blockdev write zeroes read no split ...passed 00:20:33.438 Test: blockdev write zeroes read split ...passed 00:20:33.438 Test: blockdev write zeroes read split partial ...passed 00:20:33.438 Test: blockdev reset ...[2024-04-17 14:39:42.010359] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:20:33.438 [2024-04-17 14:39:42.014593] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:33.438 passed 00:20:33.438 Test: blockdev write read 8 blocks ...passed 00:20:33.438 Test: blockdev write read size > 128k ...passed 00:20:33.438 Test: blockdev write read invalid size ...passed 00:20:33.438 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:33.438 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:33.438 Test: blockdev write read max offset ...passed 00:20:33.438 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:33.438 Test: blockdev writev readv 8 blocks ...passed 00:20:33.438 Test: blockdev writev readv 30 x 1block ...passed 00:20:33.438 Test: blockdev writev readv block ...passed 00:20:33.438 Test: blockdev writev readv size > 128k ...passed 00:20:33.438 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:33.438 Test: blockdev comparev and writev ...[2024-04-17 14:39:42.023811] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:20:33.438 separate metadata which is not supported yet. 
00:20:33.438 passed 00:20:33.438 Test: blockdev nvme passthru rw ...passed 00:20:33.438 Test: blockdev nvme passthru vendor specific ...passed 00:20:33.438 Test: blockdev nvme admin passthru ...passed 00:20:33.438 Test: blockdev copy ...passed 00:20:33.438 Suite: bdevio tests on: Nvme0n1p1 00:20:33.438 Test: blockdev write read block ...passed 00:20:33.438 Test: blockdev write zeroes read block ...passed 00:20:33.438 Test: blockdev write zeroes read no split ...passed 00:20:33.696 Test: blockdev write zeroes read split ...passed 00:20:33.696 Test: blockdev write zeroes read split partial ...passed 00:20:33.696 Test: blockdev reset ...[2024-04-17 14:39:42.100623] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:20:33.696 [2024-04-17 14:39:42.104707] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:33.696 passed 00:20:33.696 Test: blockdev write read 8 blocks ...passed 00:20:33.696 Test: blockdev write read size > 128k ...passed 00:20:33.696 Test: blockdev write read invalid size ...passed 00:20:33.696 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:33.696 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:33.696 Test: blockdev write read max offset ...passed 00:20:33.696 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:33.696 Test: blockdev writev readv 8 blocks ...passed 00:20:33.696 Test: blockdev writev readv 30 x 1block ...passed 00:20:33.696 Test: blockdev writev readv block ...passed 00:20:33.696 Test: blockdev writev readv size > 128k ...passed 00:20:33.696 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:33.696 Test: blockdev comparev and writev ...[2024-04-17 14:39:42.113484] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1passed 00:20:33.696 Test: blockdev nvme passthru rw ... since it has 00:20:33.696 separate metadata which is not supported yet. 
00:20:33.696 passed 00:20:33.696 Test: blockdev nvme passthru vendor specific ...passed 00:20:33.696 Test: blockdev nvme admin passthru ...passed 00:20:33.696 Test: blockdev copy ...passed 00:20:33.696 00:20:33.696 Run Summary: Type Total Ran Passed Failed Inactive 00:20:33.696 suites 7 7 n/a 0 0 00:20:33.696 tests 161 161 161 0 0 00:20:33.696 asserts 1006 1006 1006 0 n/a 00:20:33.696 00:20:33.696 Elapsed time = 2.076 seconds 00:20:33.696 0 00:20:33.696 14:39:42 -- bdev/blockdev.sh@295 -- # killprocess 68282 00:20:33.696 14:39:42 -- common/autotest_common.sh@936 -- # '[' -z 68282 ']' 00:20:33.696 14:39:42 -- common/autotest_common.sh@940 -- # kill -0 68282 00:20:33.696 14:39:42 -- common/autotest_common.sh@941 -- # uname 00:20:33.696 14:39:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:33.696 14:39:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68282 00:20:33.696 14:39:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:33.696 killing process with pid 68282 00:20:33.696 14:39:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:33.696 14:39:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68282' 00:20:33.696 14:39:42 -- common/autotest_common.sh@955 -- # kill 68282 00:20:33.696 14:39:42 -- common/autotest_common.sh@960 -- # wait 68282 00:20:35.071 ************************************ 00:20:35.071 END TEST bdev_bounds 00:20:35.071 ************************************ 00:20:35.071 14:39:43 -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:20:35.071 00:20:35.071 real 0m3.511s 00:20:35.071 user 0m8.691s 00:20:35.071 sys 0m0.423s 00:20:35.071 14:39:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:35.071 14:39:43 -- common/autotest_common.sh@10 -- # set +x 00:20:35.071 14:39:43 -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:20:35.071 14:39:43 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:20:35.071 14:39:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:35.071 14:39:43 -- common/autotest_common.sh@10 -- # set +x 00:20:35.071 ************************************ 00:20:35.071 START TEST bdev_nbd 00:20:35.071 ************************************ 00:20:35.071 14:39:43 -- common/autotest_common.sh@1111 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:20:35.071 14:39:43 -- bdev/blockdev.sh@300 -- # uname -s 00:20:35.071 14:39:43 -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:20:35.071 14:39:43 -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:35.071 14:39:43 -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:35.071 14:39:43 -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:20:35.071 14:39:43 -- bdev/blockdev.sh@304 -- # local bdev_all 00:20:35.071 14:39:43 -- bdev/blockdev.sh@305 -- # local bdev_num=7 00:20:35.071 14:39:43 -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:20:35.071 14:39:43 -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:20:35.071 14:39:43 -- bdev/blockdev.sh@311 -- # local nbd_all 
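
START TEST bdev_nbd just above hands the same bdev list to nbd_function_test, which exports each bdev as a kernel /dev/nbd* device through SPDK's NBD server on /var/tmp/spdk-nbd.sock, proves it readable, and unexports it. The waitfornbd helper that dominates the next stretch of trace boils down to a /proc/partitions poll plus one O_DIRECT read; a condensed sketch of the per-device cycle (paths as traced; the retry arithmetic is elided):

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
  $rpc nbd_start_disk Nvme0n1p1 /dev/nbd0   # export the bdev as a kernel block device
  grep -q -w nbd0 /proc/partitions          # retried up to 20 times until the kernel registers it
  # A single direct-I/O block read proves the NBD data path end to end
  # (the "1+0 records in/out" lines below).
  dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
  $rpc nbd_stop_disk /dev/nbd0              # unexport; waitfornbd_exit polls /proc/partitions until it is gone
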
00:20:35.071 14:39:43 -- bdev/blockdev.sh@312 -- # bdev_num=7 00:20:35.071 14:39:43 -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:20:35.071 14:39:43 -- bdev/blockdev.sh@314 -- # local nbd_list 00:20:35.071 14:39:43 -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:20:35.071 14:39:43 -- bdev/blockdev.sh@315 -- # local bdev_list 00:20:35.071 14:39:43 -- bdev/blockdev.sh@318 -- # nbd_pid=68357 00:20:35.071 14:39:43 -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:35.071 14:39:43 -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:20:35.071 14:39:43 -- bdev/blockdev.sh@320 -- # waitforlisten 68357 /var/tmp/spdk-nbd.sock 00:20:35.071 14:39:43 -- common/autotest_common.sh@817 -- # '[' -z 68357 ']' 00:20:35.071 14:39:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:20:35.071 14:39:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:35.071 14:39:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:20:35.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:20:35.071 14:39:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:35.071 14:39:43 -- common/autotest_common.sh@10 -- # set +x 00:20:35.071 [2024-04-17 14:39:43.624217] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:20:35.071 [2024-04-17 14:39:43.624564] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:35.330 [2024-04-17 14:39:43.796477] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.588 [2024-04-17 14:39:44.121774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.530 14:39:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:36.530 14:39:44 -- common/autotest_common.sh@850 -- # return 0 00:20:36.530 14:39:44 -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:20:36.530 14:39:44 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:36.530 14:39:44 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:20:36.530 14:39:44 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:20:36.530 14:39:44 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:20:36.530 14:39:44 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:36.530 14:39:44 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:20:36.530 14:39:44 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:20:36.530 14:39:44 -- bdev/nbd_common.sh@24 -- # local i 00:20:36.530 14:39:44 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:20:36.530 14:39:44 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:20:36.530 14:39:44 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:20:36.530 
14:39:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:20:36.788 14:39:45 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:20:36.788 14:39:45 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:20:36.788 14:39:45 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:20:36.788 14:39:45 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:20:36.788 14:39:45 -- common/autotest_common.sh@855 -- # local i 00:20:36.788 14:39:45 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:20:36.788 14:39:45 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:20:36.788 14:39:45 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:20:36.788 14:39:45 -- common/autotest_common.sh@859 -- # break 00:20:36.788 14:39:45 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:20:36.788 14:39:45 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:20:36.788 14:39:45 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:36.788 1+0 records in 00:20:36.788 1+0 records out 00:20:36.788 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000574443 s, 7.1 MB/s 00:20:36.788 14:39:45 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:36.788 14:39:45 -- common/autotest_common.sh@872 -- # size=4096 00:20:36.788 14:39:45 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:36.788 14:39:45 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:20:36.788 14:39:45 -- common/autotest_common.sh@875 -- # return 0 00:20:36.788 14:39:45 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:36.788 14:39:45 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:20:36.788 14:39:45 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:20:37.046 14:39:45 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:20:37.046 14:39:45 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:20:37.046 14:39:45 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:20:37.046 14:39:45 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:20:37.046 14:39:45 -- common/autotest_common.sh@855 -- # local i 00:20:37.046 14:39:45 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:20:37.046 14:39:45 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:20:37.046 14:39:45 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:20:37.046 14:39:45 -- common/autotest_common.sh@859 -- # break 00:20:37.046 14:39:45 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:20:37.046 14:39:45 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:20:37.046 14:39:45 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:37.046 1+0 records in 00:20:37.046 1+0 records out 00:20:37.046 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000902522 s, 4.5 MB/s 00:20:37.046 14:39:45 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:37.046 14:39:45 -- common/autotest_common.sh@872 -- # size=4096 00:20:37.046 14:39:45 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:37.046 14:39:45 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:20:37.046 14:39:45 -- common/autotest_common.sh@875 -- # return 0 00:20:37.046 14:39:45 -- bdev/nbd_common.sh@27 -- # 
(( i++ )) 00:20:37.046 14:39:45 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:20:37.046 14:39:45 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:20:37.303 14:39:45 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:20:37.304 14:39:45 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:20:37.304 14:39:45 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:20:37.304 14:39:45 -- common/autotest_common.sh@854 -- # local nbd_name=nbd2 00:20:37.304 14:39:45 -- common/autotest_common.sh@855 -- # local i 00:20:37.304 14:39:45 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:20:37.304 14:39:45 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:20:37.304 14:39:45 -- common/autotest_common.sh@858 -- # grep -q -w nbd2 /proc/partitions 00:20:37.304 14:39:45 -- common/autotest_common.sh@859 -- # break 00:20:37.304 14:39:45 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:20:37.304 14:39:45 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:20:37.304 14:39:45 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:37.304 1+0 records in 00:20:37.304 1+0 records out 00:20:37.304 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000656686 s, 6.2 MB/s 00:20:37.304 14:39:45 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:37.304 14:39:45 -- common/autotest_common.sh@872 -- # size=4096 00:20:37.304 14:39:45 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:37.304 14:39:45 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:20:37.304 14:39:45 -- common/autotest_common.sh@875 -- # return 0 00:20:37.304 14:39:45 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:37.304 14:39:45 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:20:37.304 14:39:45 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:20:37.561 14:39:46 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:20:37.561 14:39:46 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:20:37.561 14:39:46 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:20:37.561 14:39:46 -- common/autotest_common.sh@854 -- # local nbd_name=nbd3 00:20:37.561 14:39:46 -- common/autotest_common.sh@855 -- # local i 00:20:37.561 14:39:46 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:20:37.561 14:39:46 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:20:37.561 14:39:46 -- common/autotest_common.sh@858 -- # grep -q -w nbd3 /proc/partitions 00:20:37.561 14:39:46 -- common/autotest_common.sh@859 -- # break 00:20:37.561 14:39:46 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:20:37.561 14:39:46 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:20:37.561 14:39:46 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:37.561 1+0 records in 00:20:37.561 1+0 records out 00:20:37.561 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000991985 s, 4.1 MB/s 00:20:37.561 14:39:46 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:37.561 14:39:46 -- common/autotest_common.sh@872 -- # size=4096 00:20:37.561 14:39:46 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:37.561 14:39:46 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:20:37.561 14:39:46 -- 
common/autotest_common.sh@875 -- # return 0 00:20:37.561 14:39:46 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:37.561 14:39:46 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:20:37.561 14:39:46 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:20:38.128 14:39:46 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:20:38.128 14:39:46 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:20:38.128 14:39:46 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:20:38.128 14:39:46 -- common/autotest_common.sh@854 -- # local nbd_name=nbd4 00:20:38.128 14:39:46 -- common/autotest_common.sh@855 -- # local i 00:20:38.128 14:39:46 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:20:38.128 14:39:46 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:20:38.128 14:39:46 -- common/autotest_common.sh@858 -- # grep -q -w nbd4 /proc/partitions 00:20:38.128 14:39:46 -- common/autotest_common.sh@859 -- # break 00:20:38.128 14:39:46 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:20:38.128 14:39:46 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:20:38.128 14:39:46 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:38.128 1+0 records in 00:20:38.128 1+0 records out 00:20:38.128 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000881213 s, 4.6 MB/s 00:20:38.128 14:39:46 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:38.128 14:39:46 -- common/autotest_common.sh@872 -- # size=4096 00:20:38.128 14:39:46 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:38.128 14:39:46 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:20:38.128 14:39:46 -- common/autotest_common.sh@875 -- # return 0 00:20:38.128 14:39:46 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:38.128 14:39:46 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:20:38.128 14:39:46 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:20:38.128 14:39:46 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:20:38.128 14:39:46 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:20:38.386 14:39:46 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:20:38.386 14:39:46 -- common/autotest_common.sh@854 -- # local nbd_name=nbd5 00:20:38.386 14:39:46 -- common/autotest_common.sh@855 -- # local i 00:20:38.386 14:39:46 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:20:38.386 14:39:46 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:20:38.386 14:39:46 -- common/autotest_common.sh@858 -- # grep -q -w nbd5 /proc/partitions 00:20:38.386 14:39:46 -- common/autotest_common.sh@859 -- # break 00:20:38.386 14:39:46 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:20:38.386 14:39:46 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:20:38.386 14:39:46 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:38.386 1+0 records in 00:20:38.386 1+0 records out 00:20:38.386 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000836267 s, 4.9 MB/s 00:20:38.386 14:39:46 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:38.386 14:39:46 -- common/autotest_common.sh@872 -- # size=4096 00:20:38.386 14:39:46 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:38.386 14:39:46 
-- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:20:38.386 14:39:46 -- common/autotest_common.sh@875 -- # return 0 00:20:38.386 14:39:46 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:38.386 14:39:46 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:20:38.386 14:39:46 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:20:38.644 14:39:46 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:20:38.644 14:39:46 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:20:38.644 14:39:47 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:20:38.644 14:39:47 -- common/autotest_common.sh@854 -- # local nbd_name=nbd6 00:20:38.644 14:39:47 -- common/autotest_common.sh@855 -- # local i 00:20:38.644 14:39:47 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:20:38.644 14:39:47 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:20:38.644 14:39:47 -- common/autotest_common.sh@858 -- # grep -q -w nbd6 /proc/partitions 00:20:38.644 14:39:47 -- common/autotest_common.sh@859 -- # break 00:20:38.644 14:39:47 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:20:38.644 14:39:47 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:20:38.644 14:39:47 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:38.644 1+0 records in 00:20:38.644 1+0 records out 00:20:38.644 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107772 s, 3.8 MB/s 00:20:38.644 14:39:47 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:38.644 14:39:47 -- common/autotest_common.sh@872 -- # size=4096 00:20:38.644 14:39:47 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:38.644 14:39:47 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:20:38.644 14:39:47 -- common/autotest_common.sh@875 -- # return 0 00:20:38.644 14:39:47 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:38.644 14:39:47 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:20:38.644 14:39:47 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:38.902 14:39:47 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:20:38.902 { 00:20:38.902 "nbd_device": "/dev/nbd0", 00:20:38.902 "bdev_name": "Nvme0n1p1" 00:20:38.902 }, 00:20:38.902 { 00:20:38.902 "nbd_device": "/dev/nbd1", 00:20:38.902 "bdev_name": "Nvme0n1p2" 00:20:38.902 }, 00:20:38.902 { 00:20:38.902 "nbd_device": "/dev/nbd2", 00:20:38.902 "bdev_name": "Nvme1n1" 00:20:38.902 }, 00:20:38.902 { 00:20:38.902 "nbd_device": "/dev/nbd3", 00:20:38.902 "bdev_name": "Nvme2n1" 00:20:38.902 }, 00:20:38.902 { 00:20:38.902 "nbd_device": "/dev/nbd4", 00:20:38.902 "bdev_name": "Nvme2n2" 00:20:38.902 }, 00:20:38.902 { 00:20:38.902 "nbd_device": "/dev/nbd5", 00:20:38.902 "bdev_name": "Nvme2n3" 00:20:38.902 }, 00:20:38.902 { 00:20:38.902 "nbd_device": "/dev/nbd6", 00:20:38.902 "bdev_name": "Nvme3n1" 00:20:38.902 } 00:20:38.902 ]' 00:20:38.902 14:39:47 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:20:38.902 14:39:47 -- bdev/nbd_common.sh@119 -- # echo '[ 00:20:38.902 { 00:20:38.902 "nbd_device": "/dev/nbd0", 00:20:38.902 "bdev_name": "Nvme0n1p1" 00:20:38.902 }, 00:20:38.902 { 00:20:38.902 "nbd_device": "/dev/nbd1", 00:20:38.902 "bdev_name": "Nvme0n1p2" 00:20:38.902 }, 00:20:38.902 { 00:20:38.902 "nbd_device": "/dev/nbd2", 00:20:38.902 "bdev_name": "Nvme1n1" 00:20:38.902 
}, 00:20:38.902 { 00:20:38.902 "nbd_device": "/dev/nbd3", 00:20:38.902 "bdev_name": "Nvme2n1" 00:20:38.902 }, 00:20:38.902 { 00:20:38.902 "nbd_device": "/dev/nbd4", 00:20:38.902 "bdev_name": "Nvme2n2" 00:20:38.902 }, 00:20:38.902 { 00:20:38.902 "nbd_device": "/dev/nbd5", 00:20:38.902 "bdev_name": "Nvme2n3" 00:20:38.902 }, 00:20:38.902 { 00:20:38.902 "nbd_device": "/dev/nbd6", 00:20:38.902 "bdev_name": "Nvme3n1" 00:20:38.902 } 00:20:38.902 ]' 00:20:38.902 14:39:47 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:20:38.902 14:39:47 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:20:38.902 14:39:47 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:38.902 14:39:47 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:20:38.902 14:39:47 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:38.902 14:39:47 -- bdev/nbd_common.sh@51 -- # local i 00:20:38.902 14:39:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:38.903 14:39:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:39.161 14:39:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:39.161 14:39:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:39.161 14:39:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:39.161 14:39:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:39.161 14:39:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:39.161 14:39:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:39.161 14:39:47 -- bdev/nbd_common.sh@41 -- # break 00:20:39.161 14:39:47 -- bdev/nbd_common.sh@45 -- # return 0 00:20:39.161 14:39:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:39.161 14:39:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:20:39.418 14:39:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:39.418 14:39:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:39.418 14:39:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:39.418 14:39:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:39.418 14:39:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:39.418 14:39:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:39.418 14:39:47 -- bdev/nbd_common.sh@41 -- # break 00:20:39.418 14:39:47 -- bdev/nbd_common.sh@45 -- # return 0 00:20:39.418 14:39:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:39.418 14:39:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:20:39.676 14:39:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:20:39.676 14:39:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:20:39.676 14:39:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:20:39.676 14:39:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:39.676 14:39:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:39.676 14:39:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:20:39.676 14:39:48 -- bdev/nbd_common.sh@41 -- # break 00:20:39.676 14:39:48 -- bdev/nbd_common.sh@45 -- # return 0 00:20:39.676 14:39:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:39.676 14:39:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:20:39.933 14:39:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:20:39.933 14:39:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:20:39.934 14:39:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:20:39.934 14:39:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:39.934 14:39:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:39.934 14:39:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:20:39.934 14:39:48 -- bdev/nbd_common.sh@41 -- # break 00:20:39.934 14:39:48 -- bdev/nbd_common.sh@45 -- # return 0 00:20:39.934 14:39:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:39.934 14:39:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:20:40.191 14:39:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:20:40.191 14:39:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:20:40.191 14:39:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:20:40.191 14:39:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:40.191 14:39:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:40.191 14:39:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:20:40.191 14:39:48 -- bdev/nbd_common.sh@41 -- # break 00:20:40.191 14:39:48 -- bdev/nbd_common.sh@45 -- # return 0 00:20:40.191 14:39:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:40.191 14:39:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:20:40.449 14:39:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:20:40.449 14:39:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:20:40.449 14:39:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:20:40.449 14:39:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:40.449 14:39:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:40.449 14:39:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:20:40.449 14:39:48 -- bdev/nbd_common.sh@41 -- # break 00:20:40.449 14:39:48 -- bdev/nbd_common.sh@45 -- # return 0 00:20:40.449 14:39:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:40.449 14:39:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:20:40.707 14:39:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:20:40.707 14:39:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:20:40.707 14:39:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:20:40.707 14:39:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:40.707 14:39:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:40.707 14:39:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:20:40.707 14:39:49 -- bdev/nbd_common.sh@41 -- # break 00:20:40.707 14:39:49 -- bdev/nbd_common.sh@45 -- # return 0 00:20:40.707 14:39:49 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:40.707 14:39:49 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:40.707 14:39:49 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:40.966 14:39:49 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:40.966 14:39:49 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:40.966 14:39:49 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:40.966 14:39:49 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:40.966 14:39:49 -- 
bdev/nbd_common.sh@65 -- # echo '' 00:20:40.966 14:39:49 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:40.966 14:39:49 -- bdev/nbd_common.sh@65 -- # true 00:20:40.966 14:39:49 -- bdev/nbd_common.sh@65 -- # count=0 00:20:40.966 14:39:49 -- bdev/nbd_common.sh@66 -- # echo 0 00:20:40.966 14:39:49 -- bdev/nbd_common.sh@122 -- # count=0 00:20:40.966 14:39:49 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:20:40.966 14:39:49 -- bdev/nbd_common.sh@127 -- # return 0 00:20:40.966 14:39:49 -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:20:40.966 14:39:49 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:40.966 14:39:49 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:20:40.966 14:39:49 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:20:40.966 14:39:49 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:20:40.966 14:39:49 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:20:40.966 14:39:49 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:20:40.966 14:39:49 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:40.966 14:39:49 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:20:40.966 14:39:49 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:40.966 14:39:49 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:20:40.966 14:39:49 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:40.966 14:39:49 -- bdev/nbd_common.sh@12 -- # local i 00:20:40.966 14:39:49 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:40.966 14:39:49 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:20:40.966 14:39:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:20:41.225 /dev/nbd0 00:20:41.225 14:39:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:41.225 14:39:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:41.225 14:39:49 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:20:41.225 14:39:49 -- common/autotest_common.sh@855 -- # local i 00:20:41.225 14:39:49 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:20:41.225 14:39:49 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:20:41.225 14:39:49 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:20:41.225 14:39:49 -- common/autotest_common.sh@859 -- # break 00:20:41.225 14:39:49 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:20:41.225 14:39:49 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:20:41.225 14:39:49 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:41.225 1+0 records in 00:20:41.225 1+0 records out 00:20:41.225 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000547302 s, 7.5 MB/s 00:20:41.225 14:39:49 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:41.225 14:39:49 -- common/autotest_common.sh@872 -- # size=4096 
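The count check traced a few records back (nbd_common.sh@63-66) is how the suite proves every NBD device was detached: it asks the running app for its current exports and counts device names. A minimal reconstruction of that helper, assuming the rpc.py path and $rootdir layout used throughout this run; the real function in test/bdev/nbd_common.sh may differ in detail:

    # Reconstruction of the nbd_get_count helper seen in the xtrace above.
    # Assumes $rootdir points at the SPDK checkout used in this run.
    nbd_get_count() {
        local rpc_server=$1
        local nbd_disks_json nbd_disks_name count
        # Ask the SPDK app which NBD devices it currently exports.
        nbd_disks_json=$("$rootdir/scripts/rpc.py" -s "$rpc_server" nbd_get_disks)
        # Pull out just the /dev/nbdX names.
        nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
        # grep -c prints 0 on empty input but exits non-zero; the || true
        # matches the bare "true" record visible in the trace.
        count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
        echo "$count"
    }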
00:20:41.225 14:39:49 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:41.225 14:39:49 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:20:41.225 14:39:49 -- common/autotest_common.sh@875 -- # return 0 00:20:41.225 14:39:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:41.225 14:39:49 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:20:41.225 14:39:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:20:41.484 /dev/nbd1 00:20:41.484 14:39:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:41.484 14:39:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:41.484 14:39:49 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:20:41.484 14:39:49 -- common/autotest_common.sh@855 -- # local i 00:20:41.484 14:39:49 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:20:41.484 14:39:49 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:20:41.484 14:39:49 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:20:41.484 14:39:49 -- common/autotest_common.sh@859 -- # break 00:20:41.484 14:39:49 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:20:41.484 14:39:49 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:20:41.484 14:39:49 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:41.484 1+0 records in 00:20:41.484 1+0 records out 00:20:41.484 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000563218 s, 7.3 MB/s 00:20:41.484 14:39:49 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:41.484 14:39:49 -- common/autotest_common.sh@872 -- # size=4096 00:20:41.484 14:39:49 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:41.484 14:39:49 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:20:41.484 14:39:49 -- common/autotest_common.sh@875 -- # return 0 00:20:41.484 14:39:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:41.484 14:39:49 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:20:41.484 14:39:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:20:41.742 /dev/nbd10 00:20:41.742 14:39:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:20:41.742 14:39:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:20:41.742 14:39:50 -- common/autotest_common.sh@854 -- # local nbd_name=nbd10 00:20:41.742 14:39:50 -- common/autotest_common.sh@855 -- # local i 00:20:41.742 14:39:50 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:20:41.742 14:39:50 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:20:41.742 14:39:50 -- common/autotest_common.sh@858 -- # grep -q -w nbd10 /proc/partitions 00:20:41.742 14:39:50 -- common/autotest_common.sh@859 -- # break 00:20:41.742 14:39:50 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:20:41.742 14:39:50 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:20:41.742 14:39:50 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:41.742 1+0 records in 00:20:41.742 1+0 records out 00:20:41.742 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000888025 s, 4.6 MB/s 00:20:41.742 14:39:50 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:41.742 14:39:50 -- common/autotest_common.sh@872 -- # 
size=4096 00:20:41.742 14:39:50 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:41.742 14:39:50 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:20:41.742 14:39:50 -- common/autotest_common.sh@875 -- # return 0 00:20:41.742 14:39:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:41.742 14:39:50 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:20:41.742 14:39:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:20:42.000 /dev/nbd11 00:20:42.000 14:39:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:20:42.000 14:39:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:20:42.000 14:39:50 -- common/autotest_common.sh@854 -- # local nbd_name=nbd11 00:20:42.000 14:39:50 -- common/autotest_common.sh@855 -- # local i 00:20:42.000 14:39:50 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:20:42.000 14:39:50 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:20:42.000 14:39:50 -- common/autotest_common.sh@858 -- # grep -q -w nbd11 /proc/partitions 00:20:42.000 14:39:50 -- common/autotest_common.sh@859 -- # break 00:20:42.000 14:39:50 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:20:42.000 14:39:50 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:20:42.000 14:39:50 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:42.000 1+0 records in 00:20:42.000 1+0 records out 00:20:42.000 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536213 s, 7.6 MB/s 00:20:42.000 14:39:50 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:42.000 14:39:50 -- common/autotest_common.sh@872 -- # size=4096 00:20:42.000 14:39:50 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:42.000 14:39:50 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:20:42.000 14:39:50 -- common/autotest_common.sh@875 -- # return 0 00:20:42.000 14:39:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:42.000 14:39:50 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:20:42.000 14:39:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:20:42.258 /dev/nbd12 00:20:42.516 14:39:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:20:42.516 14:39:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:20:42.516 14:39:50 -- common/autotest_common.sh@854 -- # local nbd_name=nbd12 00:20:42.516 14:39:50 -- common/autotest_common.sh@855 -- # local i 00:20:42.516 14:39:50 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:20:42.516 14:39:50 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:20:42.516 14:39:50 -- common/autotest_common.sh@858 -- # grep -q -w nbd12 /proc/partitions 00:20:42.516 14:39:50 -- common/autotest_common.sh@859 -- # break 00:20:42.516 14:39:50 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:20:42.516 14:39:50 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:20:42.516 14:39:50 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:42.516 1+0 records in 00:20:42.516 1+0 records out 00:20:42.516 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000680726 s, 6.0 MB/s 00:20:42.516 14:39:50 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:42.516 14:39:50 -- 
common/autotest_common.sh@872 -- # size=4096 00:20:42.516 14:39:50 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:42.516 14:39:50 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:20:42.516 14:39:50 -- common/autotest_common.sh@875 -- # return 0 00:20:42.516 14:39:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:42.516 14:39:50 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:20:42.516 14:39:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:20:42.774 /dev/nbd13 00:20:42.774 14:39:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:20:42.774 14:39:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:20:42.774 14:39:51 -- common/autotest_common.sh@854 -- # local nbd_name=nbd13 00:20:42.774 14:39:51 -- common/autotest_common.sh@855 -- # local i 00:20:42.774 14:39:51 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:20:42.774 14:39:51 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:20:42.774 14:39:51 -- common/autotest_common.sh@858 -- # grep -q -w nbd13 /proc/partitions 00:20:42.774 14:39:51 -- common/autotest_common.sh@859 -- # break 00:20:42.774 14:39:51 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:20:42.774 14:39:51 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:20:42.774 14:39:51 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:42.774 1+0 records in 00:20:42.774 1+0 records out 00:20:42.774 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00121802 s, 3.4 MB/s 00:20:42.774 14:39:51 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:42.774 14:39:51 -- common/autotest_common.sh@872 -- # size=4096 00:20:42.774 14:39:51 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:42.774 14:39:51 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:20:42.774 14:39:51 -- common/autotest_common.sh@875 -- # return 0 00:20:42.774 14:39:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:42.774 14:39:51 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:20:42.774 14:39:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:20:43.032 /dev/nbd14 00:20:43.032 14:39:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:20:43.032 14:39:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:20:43.032 14:39:51 -- common/autotest_common.sh@854 -- # local nbd_name=nbd14 00:20:43.032 14:39:51 -- common/autotest_common.sh@855 -- # local i 00:20:43.032 14:39:51 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:20:43.032 14:39:51 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:20:43.032 14:39:51 -- common/autotest_common.sh@858 -- # grep -q -w nbd14 /proc/partitions 00:20:43.032 14:39:51 -- common/autotest_common.sh@859 -- # break 00:20:43.032 14:39:51 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:20:43.032 14:39:51 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:20:43.032 14:39:51 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:43.032 1+0 records in 00:20:43.032 1+0 records out 00:20:43.032 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000672799 s, 6.1 MB/s 00:20:43.032 14:39:51 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
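Every nbd_start_disk above is followed by the same readiness probe (autotest_common.sh@854-875): poll /proc/partitions until the device name appears, then prove the node is actually serving I/O with a single O_DIRECT 4 KiB read. A sketch reconstructed from that trace; the retry interval is an assumption (no sleep is visible in the log) and the scratch-file location is simplified from the repo path used in the run:

    # Reconstructed from the repeated xtrace at autotest_common.sh@854-875.
    waitfornbd() {
        local nbd_name=$1 i
        # Wait (up to 20 tries) for the kernel to register the device.
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                break
            fi
            sleep 0.1    # retry interval assumed; not visible in the log
        done
        # Read one 4096-byte block with O_DIRECT to confirm real I/O works.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        # A zero-byte read means the device node exists but is not serving I/O.
        [ "$size" != 0 ]
    }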
00:20:43.032 14:39:51 -- common/autotest_common.sh@872 -- # size=4096 00:20:43.032 14:39:51 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:43.032 14:39:51 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:20:43.032 14:39:51 -- common/autotest_common.sh@875 -- # return 0 00:20:43.032 14:39:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:43.032 14:39:51 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:20:43.032 14:39:51 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:43.032 14:39:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:43.032 14:39:51 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:43.290 14:39:51 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:43.290 { 00:20:43.290 "nbd_device": "/dev/nbd0", 00:20:43.290 "bdev_name": "Nvme0n1p1" 00:20:43.290 }, 00:20:43.290 { 00:20:43.290 "nbd_device": "/dev/nbd1", 00:20:43.290 "bdev_name": "Nvme0n1p2" 00:20:43.290 }, 00:20:43.290 { 00:20:43.290 "nbd_device": "/dev/nbd10", 00:20:43.290 "bdev_name": "Nvme1n1" 00:20:43.290 }, 00:20:43.290 { 00:20:43.290 "nbd_device": "/dev/nbd11", 00:20:43.290 "bdev_name": "Nvme2n1" 00:20:43.290 }, 00:20:43.290 { 00:20:43.290 "nbd_device": "/dev/nbd12", 00:20:43.290 "bdev_name": "Nvme2n2" 00:20:43.290 }, 00:20:43.290 { 00:20:43.290 "nbd_device": "/dev/nbd13", 00:20:43.290 "bdev_name": "Nvme2n3" 00:20:43.290 }, 00:20:43.290 { 00:20:43.290 "nbd_device": "/dev/nbd14", 00:20:43.290 "bdev_name": "Nvme3n1" 00:20:43.290 } 00:20:43.290 ]' 00:20:43.290 14:39:51 -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:43.290 { 00:20:43.290 "nbd_device": "/dev/nbd0", 00:20:43.290 "bdev_name": "Nvme0n1p1" 00:20:43.290 }, 00:20:43.290 { 00:20:43.290 "nbd_device": "/dev/nbd1", 00:20:43.290 "bdev_name": "Nvme0n1p2" 00:20:43.290 }, 00:20:43.290 { 00:20:43.290 "nbd_device": "/dev/nbd10", 00:20:43.290 "bdev_name": "Nvme1n1" 00:20:43.290 }, 00:20:43.290 { 00:20:43.290 "nbd_device": "/dev/nbd11", 00:20:43.290 "bdev_name": "Nvme2n1" 00:20:43.290 }, 00:20:43.290 { 00:20:43.290 "nbd_device": "/dev/nbd12", 00:20:43.290 "bdev_name": "Nvme2n2" 00:20:43.290 }, 00:20:43.290 { 00:20:43.290 "nbd_device": "/dev/nbd13", 00:20:43.290 "bdev_name": "Nvme2n3" 00:20:43.290 }, 00:20:43.290 { 00:20:43.290 "nbd_device": "/dev/nbd14", 00:20:43.290 "bdev_name": "Nvme3n1" 00:20:43.290 } 00:20:43.290 ]' 00:20:43.290 14:39:51 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:43.290 14:39:51 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:20:43.290 /dev/nbd1 00:20:43.290 /dev/nbd10 00:20:43.290 /dev/nbd11 00:20:43.290 /dev/nbd12 00:20:43.290 /dev/nbd13 00:20:43.290 /dev/nbd14' 00:20:43.290 14:39:51 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:20:43.290 /dev/nbd1 00:20:43.290 /dev/nbd10 00:20:43.290 /dev/nbd11 00:20:43.290 /dev/nbd12 00:20:43.290 /dev/nbd13 00:20:43.290 /dev/nbd14' 00:20:43.291 14:39:51 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:43.291 14:39:51 -- bdev/nbd_common.sh@65 -- # count=7 00:20:43.291 14:39:51 -- bdev/nbd_common.sh@66 -- # echo 7 00:20:43.291 14:39:51 -- bdev/nbd_common.sh@95 -- # count=7 00:20:43.291 14:39:51 -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:20:43.291 14:39:51 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:20:43.291 14:39:51 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' 
'/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:20:43.291 14:39:51 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:43.291 14:39:51 -- bdev/nbd_common.sh@71 -- # local operation=write 00:20:43.291 14:39:51 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:43.291 14:39:51 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:20:43.291 14:39:51 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:20:43.291 256+0 records in 00:20:43.291 256+0 records out 00:20:43.291 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113669 s, 92.2 MB/s 00:20:43.291 14:39:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:43.291 14:39:51 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:20:43.548 256+0 records in 00:20:43.548 256+0 records out 00:20:43.548 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.142514 s, 7.4 MB/s 00:20:43.548 14:39:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:43.548 14:39:51 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:20:43.548 256+0 records in 00:20:43.548 256+0 records out 00:20:43.548 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147197 s, 7.1 MB/s 00:20:43.548 14:39:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:43.548 14:39:52 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:20:43.806 256+0 records in 00:20:43.806 256+0 records out 00:20:43.806 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145676 s, 7.2 MB/s 00:20:43.806 14:39:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:43.806 14:39:52 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:20:43.806 256+0 records in 00:20:43.806 256+0 records out 00:20:43.806 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147301 s, 7.1 MB/s 00:20:43.806 14:39:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:43.806 14:39:52 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:20:44.065 256+0 records in 00:20:44.065 256+0 records out 00:20:44.065 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144336 s, 7.3 MB/s 00:20:44.065 14:39:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:44.065 14:39:52 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:20:44.323 256+0 records in 00:20:44.323 256+0 records out 00:20:44.323 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.142436 s, 7.4 MB/s 00:20:44.323 14:39:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:44.323 14:39:52 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:20:44.323 256+0 records in 00:20:44.323 256+0 records out 00:20:44.323 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14698 s, 7.1 MB/s 00:20:44.323 14:39:52 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:20:44.323 14:39:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:20:44.323 14:39:52 -- 
bdev/nbd_common.sh@70 -- # local nbd_list 00:20:44.323 14:39:52 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:20:44.323 14:39:52 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:44.323 14:39:52 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:20:44.323 14:39:52 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:20:44.323 14:39:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:44.323 14:39:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:20:44.323 14:39:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:44.323 14:39:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:20:44.323 14:39:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:44.323 14:39:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:20:44.323 14:39:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:44.323 14:39:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:20:44.323 14:39:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:44.323 14:39:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:20:44.323 14:39:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:44.323 14:39:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:20:44.323 14:39:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:44.323 14:39:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:20:44.323 14:39:52 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:44.323 14:39:52 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:20:44.323 14:39:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:44.323 14:39:52 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:20:44.323 14:39:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:44.323 14:39:52 -- bdev/nbd_common.sh@51 -- # local i 00:20:44.323 14:39:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:44.323 14:39:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:44.583 14:39:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:44.583 14:39:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:44.583 14:39:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:44.583 14:39:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:44.583 14:39:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:44.583 14:39:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:44.583 14:39:53 -- bdev/nbd_common.sh@41 -- # break 00:20:44.583 14:39:53 -- bdev/nbd_common.sh@45 -- # return 0 00:20:44.583 14:39:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:44.583 14:39:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:20:45.149 14:39:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:45.149 14:39:53 
-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:45.149 14:39:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:45.149 14:39:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:45.149 14:39:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:45.149 14:39:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:45.149 14:39:53 -- bdev/nbd_common.sh@41 -- # break 00:20:45.149 14:39:53 -- bdev/nbd_common.sh@45 -- # return 0 00:20:45.149 14:39:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:45.149 14:39:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:20:45.407 14:39:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:20:45.407 14:39:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:20:45.407 14:39:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:20:45.407 14:39:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:45.407 14:39:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:45.407 14:39:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:20:45.407 14:39:53 -- bdev/nbd_common.sh@41 -- # break 00:20:45.407 14:39:53 -- bdev/nbd_common.sh@45 -- # return 0 00:20:45.407 14:39:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:45.407 14:39:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:20:45.665 14:39:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:20:45.665 14:39:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:20:45.665 14:39:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:20:45.665 14:39:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:45.665 14:39:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:45.665 14:39:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:20:45.665 14:39:54 -- bdev/nbd_common.sh@41 -- # break 00:20:45.665 14:39:54 -- bdev/nbd_common.sh@45 -- # return 0 00:20:45.665 14:39:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:45.665 14:39:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:20:45.922 14:39:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:20:45.922 14:39:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:20:45.922 14:39:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:20:45.922 14:39:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:45.922 14:39:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:45.922 14:39:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:20:45.922 14:39:54 -- bdev/nbd_common.sh@41 -- # break 00:20:45.922 14:39:54 -- bdev/nbd_common.sh@45 -- # return 0 00:20:45.922 14:39:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:45.922 14:39:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:20:46.179 14:39:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:20:46.179 14:39:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:20:46.179 14:39:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:20:46.179 14:39:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:46.179 14:39:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:46.180 14:39:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:20:46.180 14:39:54 -- bdev/nbd_common.sh@41 -- # break 00:20:46.180 14:39:54 -- 
bdev/nbd_common.sh@45 -- # return 0 00:20:46.180 14:39:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:46.180 14:39:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:20:46.438 14:39:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:20:46.438 14:39:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:20:46.438 14:39:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:20:46.438 14:39:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:46.438 14:39:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:46.438 14:39:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:20:46.438 14:39:54 -- bdev/nbd_common.sh@41 -- # break 00:20:46.438 14:39:54 -- bdev/nbd_common.sh@45 -- # return 0 00:20:46.438 14:39:54 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:46.438 14:39:54 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:46.438 14:39:54 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:46.696 14:39:55 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:46.696 14:39:55 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:46.696 14:39:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:46.696 14:39:55 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:46.696 14:39:55 -- bdev/nbd_common.sh@65 -- # echo '' 00:20:46.696 14:39:55 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:46.696 14:39:55 -- bdev/nbd_common.sh@65 -- # true 00:20:46.696 14:39:55 -- bdev/nbd_common.sh@65 -- # count=0 00:20:46.696 14:39:55 -- bdev/nbd_common.sh@66 -- # echo 0 00:20:46.696 14:39:55 -- bdev/nbd_common.sh@104 -- # count=0 00:20:46.696 14:39:55 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:20:46.696 14:39:55 -- bdev/nbd_common.sh@109 -- # return 0 00:20:46.696 14:39:55 -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:20:46.696 14:39:55 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:46.696 14:39:55 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:20:46.696 14:39:55 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:20:46.696 14:39:55 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:20:46.696 14:39:55 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:20:46.955 malloc_lvol_verify 00:20:46.955 14:39:55 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:20:47.214 8a8857b2-b6cf-4e2c-8e7b-bb2db6038a31 00:20:47.214 14:39:55 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:20:47.473 d7c8614a-6f0a-49d2-916b-166117d545b7 00:20:47.473 14:39:55 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:20:47.730 /dev/nbd0 00:20:47.730 14:39:56 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:20:47.730 mke2fs 1.46.5 (30-Dec-2021) 00:20:47.730 Discarding device blocks: 0/4096 done 00:20:47.730 Creating filesystem with 4096 1k blocks and 1024 inodes 00:20:47.730 00:20:47.730 Allocating group 
tables: 0/1 done 00:20:47.730 Writing inode tables: 0/1 done 00:20:47.730 Creating journal (1024 blocks): done 00:20:47.730 Writing superblocks and filesystem accounting information: 0/1 done 00:20:47.730 00:20:47.730 14:39:56 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:20:47.730 14:39:56 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:47.730 14:39:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:47.730 14:39:56 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:47.730 14:39:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:47.730 14:39:56 -- bdev/nbd_common.sh@51 -- # local i 00:20:47.730 14:39:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:47.730 14:39:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:47.988 14:39:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:47.988 14:39:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:47.988 14:39:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:47.988 14:39:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:47.988 14:39:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:47.988 14:39:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:47.988 14:39:56 -- bdev/nbd_common.sh@41 -- # break 00:20:47.988 14:39:56 -- bdev/nbd_common.sh@45 -- # return 0 00:20:47.988 14:39:56 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:20:47.988 14:39:56 -- bdev/nbd_common.sh@147 -- # return 0 00:20:47.988 14:39:56 -- bdev/blockdev.sh@326 -- # killprocess 68357 00:20:47.989 14:39:56 -- common/autotest_common.sh@936 -- # '[' -z 68357 ']' 00:20:47.989 14:39:56 -- common/autotest_common.sh@940 -- # kill -0 68357 00:20:47.989 14:39:56 -- common/autotest_common.sh@941 -- # uname 00:20:47.989 14:39:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:47.989 14:39:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68357 00:20:47.989 14:39:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:47.989 14:39:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:47.989 14:39:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68357' 00:20:47.989 killing process with pid 68357 00:20:47.989 14:39:56 -- common/autotest_common.sh@955 -- # kill 68357 00:20:47.989 14:39:56 -- common/autotest_common.sh@960 -- # wait 68357 00:20:49.364 14:39:57 -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:20:49.364 00:20:49.364 real 0m14.303s 00:20:49.364 user 0m19.104s 00:20:49.364 sys 0m5.456s 00:20:49.364 14:39:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:49.364 14:39:57 -- common/autotest_common.sh@10 -- # set +x 00:20:49.364 ************************************ 00:20:49.364 END TEST bdev_nbd 00:20:49.364 ************************************ 00:20:49.364 14:39:57 -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:20:49.364 14:39:57 -- bdev/blockdev.sh@764 -- # '[' gpt = nvme ']' 00:20:49.364 14:39:57 -- bdev/blockdev.sh@764 -- # '[' gpt = gpt ']' 00:20:49.364 skipping fio tests on NVMe due to multi-ns failures. 00:20:49.364 14:39:57 -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
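The nbd_with_lvol_verify step that just finished stacks a logical volume on a malloc bdev, exports it over NBD, and proves the resulting kernel device is usable by formatting it. The same sequence can be replayed by hand with the RPCs visible in the trace above:

    # The RPC sequence from the nbd_with_lvol_verify trace above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    $rpc -s $sock bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MB backing bdev, 512 B blocks
    $rpc -s $sock bdev_lvol_create_lvstore malloc_lvol_verify lvs   # prints the new lvstore UUID
    $rpc -s $sock bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol, prints its UUID
    $rpc -s $sock nbd_start_disk lvs/lvol /dev/nbd0                 # expose the lvol to the kernel
    mkfs.ext4 /dev/nbd0                                             # must complete for the test to pass
    $rpc -s $sock nbd_stop_disk /dev/nbd0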
00:20:49.364 14:39:57 -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:49.365 14:39:57 -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:49.365 14:39:57 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:20:49.365 14:39:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:49.365 14:39:57 -- common/autotest_common.sh@10 -- # set +x 00:20:49.365 ************************************ 00:20:49.365 START TEST bdev_verify 00:20:49.365 ************************************ 00:20:49.365 14:39:57 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:49.624 [2024-04-17 14:39:58.075211] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:20:49.624 [2024-04-17 14:39:58.075614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68807 ] 00:20:49.882 [2024-04-17 14:39:58.265645] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:50.140 [2024-04-17 14:39:58.585200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.140 [2024-04-17 14:39:58.585211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:50.140 [2024-04-17 14:39:58.636683] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:20:51.074 [2024-04-17 14:39:59.326379] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:20:51.074 Running I/O for 5 seconds... 
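bdev_verify drives every bdev from bdev.json through the bdevperf example app; the full command line is in the run_test record above. The same invocation, one option per element, with each flag's meaning inferred from the job banners in the results that follow (bdevperf's own usage output is the authoritative reference):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    args=(
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json  # bdev configuration to load
        -q 128       # queue depth, matches "depth: 128" in the job banners
        -o 4096      # I/O size in bytes, matches "IO size: 4096"
        -w verify    # write, read back and compare
        -t 5         # run time, hence "Running I/O for 5 seconds..."
        -C           # appears to give every core a job per bdev (the 0x1/0x2 pairs below)
        -m 0x3       # core mask for the two reactors started above
    )
    "$bdevperf" "${args[@]}"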
00:20:56.338
00:20:56.338 Latency(us)
00:20:56.338 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:56.338 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:20:56.338 Verification LBA range: start 0x0 length 0x5e800
00:20:56.338 Nvme0n1p1 : 5.08 1310.76 5.12 0.00 0.00 97342.97 19473.55 99365.06
00:20:56.338 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:20:56.338 Verification LBA range: start 0x5e800 length 0x5e800
00:20:56.338 Nvme0n1p1 : 5.05 1241.49 4.85 0.00 0.00 102792.37 20222.54 104857.60
00:20:56.338 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:20:56.338 Verification LBA range: start 0x0 length 0x5e7ff
00:20:56.338 Nvme0n1p2 : 5.08 1310.13 5.12 0.00 0.00 96901.10 22594.32 90876.59
00:20:56.338 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:20:56.338 Verification LBA range: start 0x5e7ff length 0x5e7ff
00:20:56.338 Nvme0n1p2 : 5.05 1241.05 4.85 0.00 0.00 102676.56 23592.96 98366.42
00:20:56.338 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:20:56.338 Verification LBA range: start 0x0 length 0xa0000
00:20:56.338 Nvme1n1 : 5.08 1309.56 5.12 0.00 0.00 96766.23 25465.42 87381.33
00:20:56.338 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:20:56.338 Verification LBA range: start 0xa0000 length 0xa0000
00:20:56.338 Nvme1n1 : 5.06 1240.68 4.85 0.00 0.00 102311.42 22469.49 91375.91
00:20:56.338 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:20:56.338 Verification LBA range: start 0x0 length 0x80000
00:20:56.338 Nvme2n1 : 5.08 1308.96 5.11 0.00 0.00 96603.50 28711.01 83386.76
00:20:56.338 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:20:56.338 Verification LBA range: start 0x80000 length 0x80000
00:20:56.338 Nvme2n1 : 5.07 1249.16 4.88 0.00 0.00 101365.55 4587.52 86882.01
00:20:56.338 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:20:56.338 Verification LBA range: start 0x0 length 0x80000
00:20:56.338 Nvme2n2 : 5.09 1307.75 5.11 0.00 0.00 96487.52 26713.72 83886.08
00:20:56.338 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:20:56.338 Verification LBA range: start 0x80000 length 0x80000
00:20:56.338 Nvme2n2 : 5.08 1258.91 4.92 0.00 0.00 100491.85 8800.55 92873.87
00:20:56.338 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:20:56.338 Verification LBA range: start 0x0 length 0x80000
00:20:56.338 Nvme2n3 : 5.11 1315.59 5.14 0.00 0.00 95808.46 4587.52 88379.98
00:20:56.338 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:20:56.338 Verification LBA range: start 0x80000 length 0x80000
00:20:56.338 Nvme2n3 : 5.09 1258.52 4.92 0.00 0.00 100274.33 8987.79 98366.42
00:20:56.338 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:20:56.338 Verification LBA range: start 0x0 length 0x20000
00:20:56.338 Nvme3n1 : 5.12 1324.54 5.17 0.00 0.00 95087.26 9861.61 93373.20
00:20:56.338 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:20:56.338 Verification LBA range: start 0x20000 length 0x20000
00:20:56.338 Nvme3n1 : 5.09 1257.53 4.91 0.00 0.00 100090.01 10111.27 103858.96
00:20:56.338 ===================================================================================================================
00:20:56.338 Total : 17934.64 70.06 0.00 0.00 98854.61 4587.52 104857.60
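The MiB/s column follows directly from IOPS at the fixed 4096-byte I/O size, which makes the table easy to sanity-check:

    # MiB/s = IOPS x 4096 B / 2^20; spot-check the first row and the total.
    printf '%.2f\n' "$(echo '1310.76 * 4096 / 1048576' | bc -l)"    # 5.12, the Nvme0n1p1 row
    printf '%.2f\n' "$(echo '17934.64 * 4096 / 1048576' | bc -l)"   # 70.06, the Total row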
00:20:58.258 00:20:58.258 real 0m8.403s 00:20:58.258 user 0m14.946s 00:20:58.258 sys 0m0.357s 00:20:58.258 14:40:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:58.258 ************************************ 00:20:58.258 END TEST bdev_verify 00:20:58.258 ************************************ 00:20:58.258 14:40:06 -- common/autotest_common.sh@10 -- # set +x 00:20:58.258 14:40:06 -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:58.258 14:40:06 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:20:58.258 14:40:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:58.258 14:40:06 -- common/autotest_common.sh@10 -- # set +x 00:20:58.258 ************************************ 00:20:58.258 START TEST bdev_verify_big_io 00:20:58.258 ************************************ 00:20:58.258 14:40:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:58.258 [2024-04-17 14:40:06.620569] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:20:58.258 [2024-04-17 14:40:06.620929] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68916 ] 00:20:58.258 [2024-04-17 14:40:06.810386] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:58.824 [2024-04-17 14:40:07.148730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.824 [2024-04-17 14:40:07.148762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:58.824 [2024-04-17 14:40:07.199009] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:20:59.392 [2024-04-17 14:40:07.928755] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:20:59.651 Running I/O for 5 seconds... 
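Each sub-test in this log is framed the same way: a START banner, the timed command, the real/user/sys lines from bash's time builtin, then an END banner. A rough reconstruction of the run_test wrapper behind that framing; the real helper in common/autotest_common.sh also propagates exit codes and manages xtrace state:

    # Rough shape of run_test as implied by the banners and timings above.
    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"    # bash builtin, prints the real/user/sys lines seen in this log
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }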
00:21:06.210
00:21:06.210 Latency(us)
00:21:06.210 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:06.210 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:21:06.210 Verification LBA range: start 0x0 length 0x5e80
00:21:06.210 Nvme0n1p1 : 5.71 117.44 7.34 0.00 0.00 1029041.15 27837.20 1182394.27
00:21:06.210 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:21:06.210 Verification LBA range: start 0x5e80 length 0x5e80
00:21:06.210 Nvme0n1p1 : 5.69 120.91 7.56 0.00 0.00 1014687.52 27088.21 1294242.38
00:21:06.210 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:21:06.210 Verification LBA range: start 0x0 length 0x5e7f
00:21:06.210 Nvme0n1p2 : 5.72 123.13 7.70 0.00 0.00 980333.80 94371.84 1002638.38
00:21:06.211 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:21:06.211 Verification LBA range: start 0x5e7f length 0x5e7f
00:21:06.211 Nvme0n1p2 : 5.84 117.95 7.37 0.00 0.00 983698.21 116342.00 1086524.46
00:21:06.211 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:21:06.211 Verification LBA range: start 0x0 length 0xa000
00:21:06.211 Nvme1n1 : 6.07 79.12 4.95 0.00 0.00 1473015.22 138811.49 1877450.36
00:21:06.211 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:21:06.211 Verification LBA range: start 0xa000 length 0xa000
00:21:06.211 Nvme1n1 : 5.96 116.15 7.26 0.00 0.00 971643.63 147799.28 1533916.89
00:21:06.211 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:21:06.211 Verification LBA range: start 0x0 length 0x8000
00:21:06.211 Nvme2n1 : 5.85 125.24 7.83 0.00 0.00 906561.83 98366.42 926741.46
00:21:06.211 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:21:06.211 Verification LBA range: start 0x8000 length 0x8000
00:21:06.211 Nvme2n1 : 6.02 124.01 7.75 0.00 0.00 897406.29 62664.90 1549895.19
00:21:06.211 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:21:06.211 Verification LBA range: start 0x0 length 0x8000
00:21:06.211 Nvme2n2 : 6.05 132.62 8.29 0.00 0.00 834657.14 51679.82 946714.33
00:21:06.211 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:21:06.211 Verification LBA range: start 0x8000 length 0x8000
00:21:06.211 Nvme2n2 : 6.07 133.86 8.37 0.00 0.00 809328.86 38447.79 1118481.07
00:21:06.211 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:21:06.211 Verification LBA range: start 0x0 length 0x8000
00:21:06.211 Nvme2n3 : 6.06 137.36 8.58 0.00 0.00 786931.17 31956.60 966687.21
00:21:06.211 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:21:06.211 Verification LBA range: start 0x8000 length 0x8000
00:21:06.211 Nvme2n3 : 6.13 137.75 8.61 0.00 0.00 762853.98 19598.38 1661743.30
00:21:06.211 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:21:06.211 Verification LBA range: start 0x0 length 0x2000
00:21:06.211 Nvme3n1 : 6.07 147.57 9.22 0.00 0.00 712919.77 5305.30 986660.08
00:21:06.211 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:21:06.211 Verification LBA range: start 0x2000 length 0x2000
00:21:06.211 Nvme3n1 : 6.22 156.30 9.77 0.00 0.00 659538.59 1997.29 1701689.05
00:21:06.211 ===================================================================================================================
00:21:06.211 Total : 1769.42 110.59 0.00 0.00 887687.17
1997.29 1877450.36 00:21:08.113 00:21:08.113 real 0m10.058s 00:21:08.113 user 0m18.306s 00:21:08.113 sys 0m0.397s 00:21:08.113 14:40:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:08.113 ************************************ 00:21:08.113 END TEST bdev_verify_big_io 00:21:08.113 ************************************ 00:21:08.113 14:40:16 -- common/autotest_common.sh@10 -- # set +x 00:21:08.113 14:40:16 -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:08.113 14:40:16 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:21:08.113 14:40:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:08.113 14:40:16 -- common/autotest_common.sh@10 -- # set +x 00:21:08.113 ************************************ 00:21:08.113 START TEST bdev_write_zeroes 00:21:08.113 ************************************ 00:21:08.113 14:40:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:08.372 [2024-04-17 14:40:16.850241] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:21:08.372 [2024-04-17 14:40:16.850543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69046 ] 00:21:08.631 [2024-04-17 14:40:17.017363] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.890 [2024-04-17 14:40:17.272691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:08.890 [2024-04-17 14:40:17.322729] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:21:09.456 [2024-04-17 14:40:18.023412] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:21:09.713 Running I/O for 1 seconds... 
00:21:10.650 00:21:10.650 Latency(us) 00:21:10.650 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.650 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:10.650 Nvme0n1p1 : 1.02 6956.82 27.18 0.00 0.00 18312.04 8426.06 32206.26 00:21:10.650 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:10.650 Nvme0n1p2 : 1.02 6945.26 27.13 0.00 0.00 18308.08 13481.69 25340.59 00:21:10.650 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:10.650 Nvme1n1 : 1.02 6934.85 27.09 0.00 0.00 18275.59 13169.62 24466.77 00:21:10.650 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:10.650 Nvme2n1 : 1.03 6973.36 27.24 0.00 0.00 18153.94 8862.96 25590.25 00:21:10.650 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:10.650 Nvme2n2 : 1.03 6962.78 27.20 0.00 0.00 18151.58 9611.95 24966.10 00:21:10.650 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:10.650 Nvme2n3 : 1.03 6952.49 27.16 0.00 0.00 18147.17 9424.70 24966.10 00:21:10.650 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:10.650 Nvme3n1 : 1.03 6942.19 27.12 0.00 0.00 18145.72 9362.29 24841.26 00:21:10.650 =================================================================================================================== 00:21:10.650 Total : 48667.74 190.11 0.00 0.00 18213.12 8426.06 32206.26 00:21:12.551 00:21:12.551 real 0m3.972s 00:21:12.551 user 0m3.525s 00:21:12.551 sys 0m0.319s 00:21:12.551 14:40:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:12.551 14:40:20 -- common/autotest_common.sh@10 -- # set +x 00:21:12.551 ************************************ 00:21:12.551 END TEST bdev_write_zeroes 00:21:12.551 ************************************ 00:21:12.551 14:40:20 -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:12.551 14:40:20 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:21:12.551 14:40:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:12.551 14:40:20 -- common/autotest_common.sh@10 -- # set +x 00:21:12.551 ************************************ 00:21:12.551 START TEST bdev_json_nonenclosed 00:21:12.551 ************************************ 00:21:12.551 14:40:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:12.551 [2024-04-17 14:40:20.883481] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:21:12.551 [2024-04-17 14:40:20.883840] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69114 ] 00:21:12.551 [2024-04-17 14:40:21.052305] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.809 [2024-04-17 14:40:21.386628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.809 [2024-04-17 14:40:21.386969] json_config.c: 582:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
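bdev_json_nonenclosed is a negative test: bdevperf is pointed at nonenclosed.json and must fail with exactly the error above. The fixture itself is not reproduced in this log; a hypothetical config that would trip the same check is any valid JSON whose top-level value is not an object, for example:

    # Hypothetical fixture contents; the real test/bdev/nonenclosed.json is not shown here.
    # A top-level array parses as valid JSON but is "not enclosed in {}".
    cat > nonenclosed.json <<'EOF'
    [ { "subsystems": [] } ]
    EOF

The spdk_app_stop warning in the next records is the expected non-zero exit the test asserts on.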
00:21:12.809 [2024-04-17 14:40:21.387125] rpc.c: 193:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:12.809 [2024-04-17 14:40:21.387180] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:13.424 00:21:13.424 real 0m1.066s 00:21:13.424 user 0m0.800s 00:21:13.424 sys 0m0.158s 00:21:13.424 ************************************ 00:21:13.424 END TEST bdev_json_nonenclosed 00:21:13.424 ************************************ 00:21:13.424 14:40:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:13.424 14:40:21 -- common/autotest_common.sh@10 -- # set +x 00:21:13.424 14:40:21 -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:13.424 14:40:21 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:21:13.424 14:40:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:13.424 14:40:21 -- common/autotest_common.sh@10 -- # set +x 00:21:13.424 ************************************ 00:21:13.424 START TEST bdev_json_nonarray 00:21:13.424 ************************************ 00:21:13.424 14:40:21 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:13.682 [2024-04-17 14:40:22.107701] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:21:13.682 [2024-04-17 14:40:22.108118] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69156 ] 00:21:13.941 [2024-04-17 14:40:22.292378] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.200 [2024-04-17 14:40:22.608302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.200 [2024-04-17 14:40:22.608637] json_config.c: 588:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
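bdev_json_nonarray covers the sibling failure mode, again with a fixture that is not shown in this log; a hypothetical nonarray.json that would produce the error above keeps the outer braces but makes 'subsystems' an object instead of an array:

    # Hypothetical fixture contents; valid JSON, wrong type for "subsystems".
    cat > nonarray.json <<'EOF'
    { "subsystems": {} }
    EOF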
00:21:14.200 [2024-04-17 14:40:22.608826] rpc.c: 193:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:14.200 [2024-04-17 14:40:22.608867] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:14.765 ************************************ 00:21:14.765 END TEST bdev_json_nonarray 00:21:14.765 ************************************ 00:21:14.765 00:21:14.765 real 0m1.110s 00:21:14.765 user 0m0.821s 00:21:14.765 sys 0m0.177s 00:21:14.765 14:40:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:14.765 14:40:23 -- common/autotest_common.sh@10 -- # set +x 00:21:14.765 14:40:23 -- bdev/blockdev.sh@787 -- # [[ gpt == bdev ]] 00:21:14.765 14:40:23 -- bdev/blockdev.sh@794 -- # [[ gpt == gpt ]] 00:21:14.765 14:40:23 -- bdev/blockdev.sh@795 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:21:14.765 14:40:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:14.765 14:40:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:14.765 14:40:23 -- common/autotest_common.sh@10 -- # set +x 00:21:14.765 ************************************ 00:21:14.765 START TEST bdev_gpt_uuid 00:21:14.765 ************************************ 00:21:14.765 14:40:23 -- common/autotest_common.sh@1111 -- # bdev_gpt_uuid 00:21:14.765 14:40:23 -- bdev/blockdev.sh@614 -- # local bdev 00:21:14.765 14:40:23 -- bdev/blockdev.sh@616 -- # start_spdk_tgt 00:21:14.765 14:40:23 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69191 00:21:14.765 14:40:23 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:21:14.765 14:40:23 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:21:14.765 14:40:23 -- bdev/blockdev.sh@49 -- # waitforlisten 69191 00:21:14.765 14:40:23 -- common/autotest_common.sh@817 -- # '[' -z 69191 ']' 00:21:14.765 14:40:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.766 14:40:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:14.766 14:40:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:14.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:14.766 14:40:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:14.766 14:40:23 -- common/autotest_common.sh@10 -- # set +x 00:21:14.766 [2024-04-17 14:40:23.364354] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:21:14.766 [2024-04-17 14:40:23.364706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69191 ] 00:21:15.023 [2024-04-17 14:40:23.541240] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.281 [2024-04-17 14:40:23.788884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.282 14:40:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:16.282 14:40:24 -- common/autotest_common.sh@850 -- # return 0 00:21:16.282 14:40:24 -- bdev/blockdev.sh@618 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:16.282 14:40:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:16.282 14:40:24 -- common/autotest_common.sh@10 -- # set +x 00:21:16.540 Some configs were skipped because the RPC state that can call them passed over. 
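The bdev_gpt_uuid test now underway drives a bare spdk_tgt entirely over its RPC socket: load the bdev config, wait for bdev examine to finish so the GPT partition bdevs appear, then look each partition up by its unique partition GUID and compare the alias against that GUID with jq, exactly the sequence blockdev.sh traces below. A hand-run sketch of the same steps, assuming a running spdk_tgt on the default /var/tmp/spdk.sock (rpc_cmd in the trace is a thin wrapper around scripts/rpc.py):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    $rpc bdev_wait_for_examine
    # a GPT partition bdev is addressable by its unique partition GUID
    $rpc bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 \
        | jq -r '.[0].driver_specific.gpt.unique_partition_guid'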
00:21:16.540 14:40:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:16.540 14:40:25 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_wait_for_examine 00:21:16.540 14:40:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:16.540 14:40:25 -- common/autotest_common.sh@10 -- # set +x 00:21:16.540 14:40:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:16.540 14:40:25 -- bdev/blockdev.sh@621 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:21:16.540 14:40:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:16.540 14:40:25 -- common/autotest_common.sh@10 -- # set +x 00:21:16.799 14:40:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:16.799 14:40:25 -- bdev/blockdev.sh@621 -- # bdev='[ 00:21:16.799 { 00:21:16.799 "name": "Nvme0n1p1", 00:21:16.799 "aliases": [ 00:21:16.799 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:21:16.799 ], 00:21:16.799 "product_name": "GPT Disk", 00:21:16.799 "block_size": 4096, 00:21:16.799 "num_blocks": 774144, 00:21:16.799 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:21:16.799 "md_size": 64, 00:21:16.799 "md_interleave": false, 00:21:16.799 "dif_type": 0, 00:21:16.799 "assigned_rate_limits": { 00:21:16.799 "rw_ios_per_sec": 0, 00:21:16.799 "rw_mbytes_per_sec": 0, 00:21:16.799 "r_mbytes_per_sec": 0, 00:21:16.799 "w_mbytes_per_sec": 0 00:21:16.799 }, 00:21:16.799 "claimed": false, 00:21:16.799 "zoned": false, 00:21:16.799 "supported_io_types": { 00:21:16.799 "read": true, 00:21:16.799 "write": true, 00:21:16.799 "unmap": true, 00:21:16.799 "write_zeroes": true, 00:21:16.799 "flush": true, 00:21:16.799 "reset": true, 00:21:16.799 "compare": true, 00:21:16.799 "compare_and_write": false, 00:21:16.799 "abort": true, 00:21:16.799 "nvme_admin": false, 00:21:16.799 "nvme_io": false 00:21:16.799 }, 00:21:16.799 "driver_specific": { 00:21:16.799 "gpt": { 00:21:16.799 "base_bdev": "Nvme0n1", 00:21:16.799 "offset_blocks": 256, 00:21:16.799 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:21:16.799 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:21:16.799 "partition_name": "SPDK_TEST_first" 00:21:16.799 } 00:21:16.799 } 00:21:16.799 } 00:21:16.799 ]' 00:21:16.799 14:40:25 -- bdev/blockdev.sh@622 -- # jq -r length 00:21:16.799 14:40:25 -- bdev/blockdev.sh@622 -- # [[ 1 == \1 ]] 00:21:16.799 14:40:25 -- bdev/blockdev.sh@623 -- # jq -r '.[0].aliases[0]' 00:21:16.799 14:40:25 -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:21:16.799 14:40:25 -- bdev/blockdev.sh@624 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:21:16.799 14:40:25 -- bdev/blockdev.sh@624 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:21:16.799 14:40:25 -- bdev/blockdev.sh@626 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:21:16.799 14:40:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:16.799 14:40:25 -- common/autotest_common.sh@10 -- # set +x 00:21:16.799 14:40:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:16.799 14:40:25 -- bdev/blockdev.sh@626 -- # bdev='[ 00:21:16.799 { 00:21:16.799 "name": "Nvme0n1p2", 00:21:16.799 "aliases": [ 00:21:16.799 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:21:16.799 ], 00:21:16.799 "product_name": "GPT Disk", 00:21:16.799 "block_size": 4096, 00:21:16.799 "num_blocks": 774143, 00:21:16.799 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 
00:21:16.799 "md_size": 64, 00:21:16.799 "md_interleave": false, 00:21:16.799 "dif_type": 0, 00:21:16.799 "assigned_rate_limits": { 00:21:16.799 "rw_ios_per_sec": 0, 00:21:16.799 "rw_mbytes_per_sec": 0, 00:21:16.799 "r_mbytes_per_sec": 0, 00:21:16.799 "w_mbytes_per_sec": 0 00:21:16.799 }, 00:21:16.799 "claimed": false, 00:21:16.799 "zoned": false, 00:21:16.799 "supported_io_types": { 00:21:16.799 "read": true, 00:21:16.799 "write": true, 00:21:16.799 "unmap": true, 00:21:16.799 "write_zeroes": true, 00:21:16.799 "flush": true, 00:21:16.799 "reset": true, 00:21:16.799 "compare": true, 00:21:16.799 "compare_and_write": false, 00:21:16.799 "abort": true, 00:21:16.799 "nvme_admin": false, 00:21:16.799 "nvme_io": false 00:21:16.799 }, 00:21:16.799 "driver_specific": { 00:21:16.799 "gpt": { 00:21:16.799 "base_bdev": "Nvme0n1", 00:21:16.799 "offset_blocks": 774400, 00:21:16.799 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:21:16.799 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:21:16.799 "partition_name": "SPDK_TEST_second" 00:21:16.799 } 00:21:16.799 } 00:21:16.799 } 00:21:16.799 ]' 00:21:16.799 14:40:25 -- bdev/blockdev.sh@627 -- # jq -r length 00:21:16.799 14:40:25 -- bdev/blockdev.sh@627 -- # [[ 1 == \1 ]] 00:21:16.799 14:40:25 -- bdev/blockdev.sh@628 -- # jq -r '.[0].aliases[0]' 00:21:17.058 14:40:25 -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:21:17.058 14:40:25 -- bdev/blockdev.sh@629 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:21:17.058 14:40:25 -- bdev/blockdev.sh@629 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:21:17.058 14:40:25 -- bdev/blockdev.sh@631 -- # killprocess 69191 00:21:17.058 14:40:25 -- common/autotest_common.sh@936 -- # '[' -z 69191 ']' 00:21:17.058 14:40:25 -- common/autotest_common.sh@940 -- # kill -0 69191 00:21:17.058 14:40:25 -- common/autotest_common.sh@941 -- # uname 00:21:17.058 14:40:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:17.058 14:40:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69191 00:21:17.058 14:40:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:17.058 14:40:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:17.058 14:40:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69191' 00:21:17.058 killing process with pid 69191 00:21:17.059 14:40:25 -- common/autotest_common.sh@955 -- # kill 69191 00:21:17.059 14:40:25 -- common/autotest_common.sh@960 -- # wait 69191 00:21:19.594 ************************************ 00:21:19.594 END TEST bdev_gpt_uuid 00:21:19.594 ************************************ 00:21:19.594 00:21:19.594 real 0m4.951s 00:21:19.594 user 0m5.111s 00:21:19.594 sys 0m0.552s 00:21:19.594 14:40:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:19.594 14:40:28 -- common/autotest_common.sh@10 -- # set +x 00:21:19.853 14:40:28 -- bdev/blockdev.sh@798 -- # [[ gpt == crypto_sw ]] 00:21:19.853 14:40:28 -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:21:19.853 14:40:28 -- bdev/blockdev.sh@811 -- # cleanup 00:21:19.853 14:40:28 -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:21:19.853 14:40:28 -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:19.853 14:40:28 -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 
00:21:19.853 14:40:28 -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:21:19.853 14:40:28 -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:21:19.853 14:40:28 -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:20.111 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:20.369 Waiting for block devices as requested 00:21:20.369 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:20.628 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:20.628 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:21:20.628 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:21:25.892 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:21:25.892 14:40:34 -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme1n1 ]] 00:21:25.892 14:40:34 -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme1n1 00:21:26.150 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:21:26.150 /dev/nvme1n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:21:26.150 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:21:26.150 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:21:26.150 14:40:34 -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:21:26.150 ************************************ 00:21:26.150 END TEST blockdev_nvme_gpt 00:21:26.150 ************************************ 00:21:26.150 00:21:26.150 real 1m13.031s 00:21:26.150 user 1m30.412s 00:21:26.150 sys 0m12.189s 00:21:26.150 14:40:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:26.150 14:40:34 -- common/autotest_common.sh@10 -- # set +x 00:21:26.151 14:40:34 -- spdk/autotest.sh@211 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:21:26.151 14:40:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:26.151 14:40:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:26.151 14:40:34 -- common/autotest_common.sh@10 -- # set +x 00:21:26.151 ************************************ 00:21:26.151 START TEST nvme 00:21:26.151 ************************************ 00:21:26.151 14:40:34 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:21:26.409 * Looking for test storage... 00:21:26.409 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:21:26.409 14:40:34 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:26.976 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:27.543 lsblk: /dev/nvme3c3n1: not a block device 00:21:27.543 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:21:27.543 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:27.543 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:27.801 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:21:27.801 14:40:36 -- nvme/nvme.sh@79 -- # uname 00:21:27.801 14:40:36 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:21:27.801 14:40:36 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:21:27.801 14:40:36 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:21:27.801 14:40:36 -- common/autotest_common.sh@1068 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:21:27.801 14:40:36 -- common/autotest_common.sh@1054 -- # _randomize_va_space=2 00:21:27.801 14:40:36 -- common/autotest_common.sh@1055 -- # echo 0 00:21:27.801 Waiting for stub to ready for secondary processes... 
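The nvme suite that starts here first parks a long-lived "stub" app as the DPDK primary process holding the hugepage allocation (-s 4096), so that each short-lived test binary can attach as a secondary process instead of paying EAL setup per test. Reduced to its core, the wait loop traced below amounts to this sketch, with the paths and flags taken from this run:

    /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
    stubpid=$!
    echo "Waiting for stub to ready for secondary processes..."
    # the stub creates /var/run/spdk_stub0 once the primary process is ready
    while [ ! -e /var/run/spdk_stub0 ] && [ -e /proc/$stubpid ]; do
        sleep 1s
    done
    echo done.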
00:21:27.801 14:40:36 -- common/autotest_common.sh@1057 -- # stubpid=69876 00:21:27.801 14:40:36 -- common/autotest_common.sh@1058 -- # echo Waiting for stub to ready for secondary processes... 00:21:27.801 14:40:36 -- common/autotest_common.sh@1056 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:21:27.801 14:40:36 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:21:27.801 14:40:36 -- common/autotest_common.sh@1061 -- # [[ -e /proc/69876 ]] 00:21:27.801 14:40:36 -- common/autotest_common.sh@1062 -- # sleep 1s 00:21:27.801 [2024-04-17 14:40:36.363733] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:21:27.801 [2024-04-17 14:40:36.364122] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:21:28.739 14:40:37 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:21:28.739 14:40:37 -- common/autotest_common.sh@1061 -- # [[ -e /proc/69876 ]] 00:21:28.739 14:40:37 -- common/autotest_common.sh@1062 -- # sleep 1s 00:21:28.997 [2024-04-17 14:40:37.405430] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:29.256 [2024-04-17 14:40:37.710240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:29.256 [2024-04-17 14:40:37.710383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:29.256 [2024-04-17 14:40:37.710406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:29.256 [2024-04-17 14:40:37.711342] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:21:29.256 [2024-04-17 14:40:37.711699] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:21:29.256 [2024-04-17 14:40:37.731098] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:21:29.256 [2024-04-17 14:40:37.731431] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:21:29.256 [2024-04-17 14:40:37.745683] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:21:29.256 [2024-04-17 14:40:37.746090] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:21:29.256 [2024-04-17 14:40:37.754096] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:21:29.256 [2024-04-17 14:40:37.754555] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:21:29.256 [2024-04-17 14:40:37.754885] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:21:29.256 [2024-04-17 14:40:37.763209] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:21:29.256 [2024-04-17 14:40:37.763597] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:21:29.256 [2024-04-17 14:40:37.763836] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:21:29.256 [2024-04-17 14:40:37.769947] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:21:29.256 [2024-04-17 14:40:37.770317] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:21:29.256 [2024-04-17 14:40:37.770551] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:21:29.256 [2024-04-17 14:40:37.770754] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:21:29.256 [2024-04-17 14:40:37.770954] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:21:29.823 done. 00:21:29.823 14:40:38 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:21:29.823 14:40:38 -- common/autotest_common.sh@1064 -- # echo done. 00:21:29.823 14:40:38 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:21:29.823 14:40:38 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:21:29.823 14:40:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:29.823 14:40:38 -- common/autotest_common.sh@10 -- # set +x 00:21:29.823 ************************************ 00:21:29.823 START TEST nvme_reset 00:21:29.823 ************************************ 00:21:29.823 14:40:38 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:21:30.082 Initializing NVMe Controllers 00:21:30.082 Skipping QEMU NVMe SSD at 0000:00:10.0 00:21:30.082 Skipping QEMU NVMe SSD at 0000:00:11.0 00:21:30.082 Skipping QEMU NVMe SSD at 0000:00:13.0 00:21:30.082 Skipping QEMU NVMe SSD at 0000:00:12.0 00:21:30.082 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:21:30.341 ************************************ 00:21:30.341 END TEST nvme_reset 00:21:30.341 ************************************ 00:21:30.341 00:21:30.341 real 0m0.313s 00:21:30.341 user 0m0.125s 00:21:30.341 sys 0m0.152s 00:21:30.341 14:40:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:30.341 14:40:38 -- common/autotest_common.sh@10 -- # set +x 00:21:30.341 14:40:38 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:21:30.341 14:40:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:30.341 14:40:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:30.341 14:40:38 -- common/autotest_common.sh@10 -- # set +x 00:21:30.341 ************************************ 00:21:30.341 START TEST nvme_identify 00:21:30.341 ************************************ 00:21:30.341 14:40:38 -- common/autotest_common.sh@1111 -- # nvme_identify 00:21:30.341 14:40:38 -- nvme/nvme.sh@12 -- # bdfs=() 00:21:30.341 14:40:38 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:21:30.341 14:40:38 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:21:30.341 14:40:38 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:21:30.341 14:40:38 -- common/autotest_common.sh@1499 -- # bdfs=() 00:21:30.341 14:40:38 -- common/autotest_common.sh@1499 -- # local bdfs 00:21:30.341 14:40:38 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:30.341 14:40:38 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:30.341 14:40:38 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:21:30.341 14:40:38 -- common/autotest_common.sh@1501 -- # (( 4 == 0 )) 00:21:30.341 14:40:38 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:21:30.341 14:40:38 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:21:30.603 [2024-04-17 14:40:39.175657] nvme_ctrlr.c:3484:nvme_ctrlr_remove_inactive_proc: *ERROR*: 
[0000:00:10.0] process 69913 terminated unexpected 00:21:30.603 ===================================================== 00:21:30.603 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:21:30.603 ===================================================== 00:21:30.603 Controller Capabilities/Features 00:21:30.603 ================================ 00:21:30.603 Vendor ID: 1b36 00:21:30.603 Subsystem Vendor ID: 1af4 00:21:30.603 Serial Number: 12340 00:21:30.603 Model Number: QEMU NVMe Ctrl 00:21:30.603 Firmware Version: 8.0.0 00:21:30.603 Recommended Arb Burst: 6 00:21:30.603 IEEE OUI Identifier: 00 54 52 00:21:30.603 Multi-path I/O 00:21:30.603 May have multiple subsystem ports: No 00:21:30.603 May have multiple controllers: No 00:21:30.603 Associated with SR-IOV VF: No 00:21:30.603 Max Data Transfer Size: 524288 00:21:30.603 Max Number of Namespaces: 256 00:21:30.603 Max Number of I/O Queues: 64 00:21:30.603 NVMe Specification Version (VS): 1.4 00:21:30.603 NVMe Specification Version (Identify): 1.4 00:21:30.603 Maximum Queue Entries: 2048 00:21:30.603 Contiguous Queues Required: Yes 00:21:30.603 Arbitration Mechanisms Supported 00:21:30.603 Weighted Round Robin: Not Supported 00:21:30.603 Vendor Specific: Not Supported 00:21:30.603 Reset Timeout: 7500 ms 00:21:30.603 Doorbell Stride: 4 bytes 00:21:30.603 NVM Subsystem Reset: Not Supported 00:21:30.603 Command Sets Supported 00:21:30.603 NVM Command Set: Supported 00:21:30.603 Boot Partition: Not Supported 00:21:30.603 Memory Page Size Minimum: 4096 bytes 00:21:30.603 Memory Page Size Maximum: 65536 bytes 00:21:30.603 Persistent Memory Region: Not Supported 00:21:30.603 Optional Asynchronous Events Supported 00:21:30.603 Namespace Attribute Notices: Supported 00:21:30.603 Firmware Activation Notices: Not Supported 00:21:30.603 ANA Change Notices: Not Supported 00:21:30.603 PLE Aggregate Log Change Notices: Not Supported 00:21:30.603 LBA Status Info Alert Notices: Not Supported 00:21:30.603 EGE Aggregate Log Change Notices: Not Supported 00:21:30.603 Normal NVM Subsystem Shutdown event: Not Supported 00:21:30.603 Zone Descriptor Change Notices: Not Supported 00:21:30.603 Discovery Log Change Notices: Not Supported 00:21:30.603 Controller Attributes 00:21:30.603 128-bit Host Identifier: Not Supported 00:21:30.603 Non-Operational Permissive Mode: Not Supported 00:21:30.603 NVM Sets: Not Supported 00:21:30.603 Read Recovery Levels: Not Supported 00:21:30.603 Endurance Groups: Not Supported 00:21:30.603 Predictable Latency Mode: Not Supported 00:21:30.603 Traffic Based Keep ALive: Not Supported 00:21:30.603 Namespace Granularity: Not Supported 00:21:30.603 SQ Associations: Not Supported 00:21:30.603 UUID List: Not Supported 00:21:30.603 Multi-Domain Subsystem: Not Supported 00:21:30.603 Fixed Capacity Management: Not Supported 00:21:30.603 Variable Capacity Management: Not Supported 00:21:30.603 Delete Endurance Group: Not Supported 00:21:30.603 Delete NVM Set: Not Supported 00:21:30.603 Extended LBA Formats Supported: Supported 00:21:30.603 Flexible Data Placement Supported: Not Supported 00:21:30.603 00:21:30.603 Controller Memory Buffer Support 00:21:30.603 ================================ 00:21:30.603 Supported: No 00:21:30.603 00:21:30.603 Persistent Memory Region Support 00:21:30.603 ================================ 00:21:30.603 Supported: No 00:21:30.603 00:21:30.603 Admin Command Set Attributes 00:21:30.603 ============================ 00:21:30.603 Security Send/Receive: Not Supported 00:21:30.603 Format NVM: Supported 00:21:30.603 Firmware 
Activate/Download: Not Supported 00:21:30.603 Namespace Management: Supported 00:21:30.603 Device Self-Test: Not Supported 00:21:30.603 Directives: Supported 00:21:30.603 NVMe-MI: Not Supported 00:21:30.603 Virtualization Management: Not Supported 00:21:30.603 Doorbell Buffer Config: Supported 00:21:30.603 Get LBA Status Capability: Not Supported 00:21:30.603 Command & Feature Lockdown Capability: Not Supported 00:21:30.603 Abort Command Limit: 4 00:21:30.603 Async Event Request Limit: 4 00:21:30.603 Number of Firmware Slots: N/A 00:21:30.603 Firmware Slot 1 Read-Only: N/A 00:21:30.603 Firmware Activation Without Reset: N/A 00:21:30.603 Multiple Update Detection Support: N/A 00:21:30.603 Firmware Update Granularity: No Information Provided 00:21:30.603 Per-Namespace SMART Log: Yes 00:21:30.603 Asymmetric Namespace Access Log Page: Not Supported 00:21:30.603 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:21:30.603 Command Effects Log Page: Supported 00:21:30.603 Get Log Page Extended Data: Supported 00:21:30.603 Telemetry Log Pages: Not Supported 00:21:30.603 Persistent Event Log Pages: Not Supported 00:21:30.603 Supported Log Pages Log Page: May Support 00:21:30.603 Commands Supported & Effects Log Page: Not Supported 00:21:30.603 Feature Identifiers & Effects Log Page:May Support 00:21:30.603 NVMe-MI Commands & Effects Log Page: May Support 00:21:30.603 Data Area 4 for Telemetry Log: Not Supported 00:21:30.603 Error Log Page Entries Supported: 1 00:21:30.603 Keep Alive: Not Supported 00:21:30.603 00:21:30.603 NVM Command Set Attributes 00:21:30.603 ========================== 00:21:30.603 Submission Queue Entry Size 00:21:30.603 Max: 64 00:21:30.603 Min: 64 00:21:30.603 Completion Queue Entry Size 00:21:30.603 Max: 16 00:21:30.603 Min: 16 00:21:30.603 Number of Namespaces: 256 00:21:30.603 Compare Command: Supported 00:21:30.603 Write Uncorrectable Command: Not Supported 00:21:30.603 Dataset Management Command: Supported 00:21:30.603 Write Zeroes Command: Supported 00:21:30.603 Set Features Save Field: Supported 00:21:30.603 Reservations: Not Supported 00:21:30.603 Timestamp: Supported 00:21:30.603 Copy: Supported 00:21:30.603 Volatile Write Cache: Present 00:21:30.603 Atomic Write Unit (Normal): 1 00:21:30.603 Atomic Write Unit (PFail): 1 00:21:30.603 Atomic Compare & Write Unit: 1 00:21:30.603 Fused Compare & Write: Not Supported 00:21:30.603 Scatter-Gather List 00:21:30.603 SGL Command Set: Supported 00:21:30.603 SGL Keyed: Not Supported 00:21:30.603 SGL Bit Bucket Descriptor: Not Supported 00:21:30.603 SGL Metadata Pointer: Not Supported 00:21:30.603 Oversized SGL: Not Supported 00:21:30.603 SGL Metadata Address: Not Supported 00:21:30.603 SGL Offset: Not Supported 00:21:30.603 Transport SGL Data Block: Not Supported 00:21:30.603 Replay Protected Memory Block: Not Supported 00:21:30.603 00:21:30.603 Firmware Slot Information 00:21:30.603 ========================= 00:21:30.603 Active slot: 1 00:21:30.603 Slot 1 Firmware Revision: 1.0 00:21:30.603 00:21:30.603 00:21:30.603 Commands Supported and Effects 00:21:30.603 ============================== 00:21:30.603 Admin Commands 00:21:30.603 -------------- 00:21:30.603 Delete I/O Submission Queue (00h): Supported 00:21:30.603 Create I/O Submission Queue (01h): Supported 00:21:30.603 Get Log Page (02h): Supported 00:21:30.603 Delete I/O Completion Queue (04h): Supported 00:21:30.603 Create I/O Completion Queue (05h): Supported 00:21:30.603 Identify (06h): Supported 00:21:30.603 Abort (08h): Supported 00:21:30.603 Set Features (09h): Supported 
00:21:30.603 Get Features (0Ah): Supported 00:21:30.603 Asynchronous Event Request (0Ch): Supported 00:21:30.603 Namespace Attachment (15h): Supported NS-Inventory-Change 00:21:30.603 Directive Send (19h): Supported 00:21:30.603 Directive Receive (1Ah): Supported 00:21:30.603 Virtualization Management (1Ch): Supported 00:21:30.603 Doorbell Buffer Config (7Ch): Supported 00:21:30.603 Format NVM (80h): Supported LBA-Change 00:21:30.603 I/O Commands 00:21:30.603 ------------ 00:21:30.603 Flush (00h): Supported LBA-Change 00:21:30.603 Write (01h): Supported LBA-Change 00:21:30.603 Read (02h): Supported 00:21:30.603 Compare (05h): Supported 00:21:30.603 Write Zeroes (08h): Supported LBA-Change 00:21:30.603 Dataset Management (09h): Supported LBA-Change 00:21:30.603 Unknown (0Ch): Supported 00:21:30.603 Unknown (12h): Supported 00:21:30.603 Copy (19h): Supported LBA-Change 00:21:30.603 Unknown (1Dh): Supported LBA-Change 00:21:30.603 00:21:30.603 Error Log 00:21:30.603 ========= 00:21:30.603 00:21:30.604 Arbitration 00:21:30.604 =========== 00:21:30.604 Arbitration Burst: no limit 00:21:30.604 00:21:30.604 Power Management 00:21:30.604 ================ 00:21:30.604 Number of Power States: 1 00:21:30.604 Current Power State: Power State #0 00:21:30.604 Power State #0: 00:21:30.604 Max Power: 25.00 W 00:21:30.604 Non-Operational State: Operational 00:21:30.604 Entry Latency: 16 microseconds 00:21:30.604 Exit Latency: 4 microseconds 00:21:30.604 Relative Read Throughput: 0 00:21:30.604 Relative Read Latency: 0 00:21:30.604 Relative Write Throughput: 0 00:21:30.604 Relative Write Latency: 0 00:21:30.604 Idle Power[2024-04-17 14:40:39.177561] nvme_ctrlr.c:3484:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 69913 terminated unexpected 00:21:30.604 : Not Reported 00:21:30.604 Active Power: Not Reported 00:21:30.604 Non-Operational Permissive Mode: Not Supported 00:21:30.604 00:21:30.604 Health Information 00:21:30.604 ================== 00:21:30.604 Critical Warnings: 00:21:30.604 Available Spare Space: OK 00:21:30.604 Temperature: OK 00:21:30.604 Device Reliability: OK 00:21:30.604 Read Only: No 00:21:30.604 Volatile Memory Backup: OK 00:21:30.604 Current Temperature: 323 Kelvin (50 Celsius) 00:21:30.604 Temperature Threshold: 343 Kelvin (70 Celsius) 00:21:30.604 Available Spare: 0% 00:21:30.604 Available Spare Threshold: 0% 00:21:30.604 Life Percentage Used: 0% 00:21:30.604 Data Units Read: 1050 00:21:30.604 Data Units Written: 882 00:21:30.604 Host Read Commands: 47535 00:21:30.604 Host Write Commands: 46022 00:21:30.604 Controller Busy Time: 0 minutes 00:21:30.604 Power Cycles: 0 00:21:30.604 Power On Hours: 0 hours 00:21:30.604 Unsafe Shutdowns: 0 00:21:30.604 Unrecoverable Media Errors: 0 00:21:30.604 Lifetime Error Log Entries: 0 00:21:30.604 Warning Temperature Time: 0 minutes 00:21:30.604 Critical Temperature Time: 0 minutes 00:21:30.604 00:21:30.604 Number of Queues 00:21:30.604 ================ 00:21:30.604 Number of I/O Submission Queues: 64 00:21:30.604 Number of I/O Completion Queues: 64 00:21:30.604 00:21:30.604 ZNS Specific Controller Data 00:21:30.604 ============================ 00:21:30.604 Zone Append Size Limit: 0 00:21:30.604 00:21:30.604 00:21:30.604 Active Namespaces 00:21:30.604 ================= 00:21:30.604 Namespace ID:1 00:21:30.604 Error Recovery Timeout: Unlimited 00:21:30.604 Command Set Identifier: NVM (00h) 00:21:30.604 Deallocate: Supported 00:21:30.604 Deallocated/Unwritten Error: Supported 00:21:30.604 Deallocated Read Value: All 0x00 00:21:30.604 
Deallocate in Write Zeroes: Not Supported 00:21:30.604 Deallocated Guard Field: 0xFFFF 00:21:30.604 Flush: Supported 00:21:30.604 Reservation: Not Supported 00:21:30.604 Metadata Transferred as: Separate Metadata Buffer 00:21:30.604 Namespace Sharing Capabilities: Private 00:21:30.604 Size (in LBAs): 1548666 (5GiB) 00:21:30.604 Capacity (in LBAs): 1548666 (5GiB) 00:21:30.604 Utilization (in LBAs): 1548666 (5GiB) 00:21:30.604 Thin Provisioning: Not Supported 00:21:30.604 Per-NS Atomic Units: No 00:21:30.604 Maximum Single Source Range Length: 128 00:21:30.604 Maximum Copy Length: 128 00:21:30.604 Maximum Source Range Count: 128 00:21:30.604 NGUID/EUI64 Never Reused: No 00:21:30.604 Namespace Write Protected: No 00:21:30.604 Number of LBA Formats: 8 00:21:30.604 Current LBA Format: LBA Format #07 00:21:30.604 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:30.604 LBA Format #01: Data Size: 512 Metadata Size: 8 00:21:30.604 LBA Format #02: Data Size: 512 Metadata Size: 16 00:21:30.604 LBA Format #03: Data Size: 512 Metadata Size: 64 00:21:30.604 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:21:30.604 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:21:30.604 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:21:30.604 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:21:30.604 00:21:30.604 ===================================================== 00:21:30.604 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:21:30.604 ===================================================== 00:21:30.604 Controller Capabilities/Features 00:21:30.604 ================================ 00:21:30.604 Vendor ID: 1b36 00:21:30.604 Subsystem Vendor ID: 1af4 00:21:30.604 Serial Number: 12341 00:21:30.604 Model Number: QEMU NVMe Ctrl 00:21:30.604 Firmware Version: 8.0.0 00:21:30.604 Recommended Arb Burst: 6 00:21:30.604 IEEE OUI Identifier: 00 54 52 00:21:30.604 Multi-path I/O 00:21:30.604 May have multiple subsystem ports: No 00:21:30.604 May have multiple controllers: No 00:21:30.604 Associated with SR-IOV VF: No 00:21:30.604 Max Data Transfer Size: 524288 00:21:30.604 Max Number of Namespaces: 256 00:21:30.604 Max Number of I/O Queues: 64 00:21:30.604 NVMe Specification Version (VS): 1.4 00:21:30.604 NVMe Specification Version (Identify): 1.4 00:21:30.604 Maximum Queue Entries: 2048 00:21:30.604 Contiguous Queues Required: Yes 00:21:30.604 Arbitration Mechanisms Supported 00:21:30.604 Weighted Round Robin: Not Supported 00:21:30.604 Vendor Specific: Not Supported 00:21:30.604 Reset Timeout: 7500 ms 00:21:30.604 Doorbell Stride: 4 bytes 00:21:30.604 NVM Subsystem Reset: Not Supported 00:21:30.604 Command Sets Supported 00:21:30.604 NVM Command Set: Supported 00:21:30.604 Boot Partition: Not Supported 00:21:30.604 Memory Page Size Minimum: 4096 bytes 00:21:30.604 Memory Page Size Maximum: 65536 bytes 00:21:30.604 Persistent Memory Region: Not Supported 00:21:30.604 Optional Asynchronous Events Supported 00:21:30.604 Namespace Attribute Notices: Supported 00:21:30.604 Firmware Activation Notices: Not Supported 00:21:30.604 ANA Change Notices: Not Supported 00:21:30.604 PLE Aggregate Log Change Notices: Not Supported 00:21:30.604 LBA Status Info Alert Notices: Not Supported 00:21:30.604 EGE Aggregate Log Change Notices: Not Supported 00:21:30.604 Normal NVM Subsystem Shutdown event: Not Supported 00:21:30.604 Zone Descriptor Change Notices: Not Supported 00:21:30.604 Discovery Log Change Notices: Not Supported 00:21:30.604 Controller Attributes 00:21:30.604 128-bit Host Identifier: Not Supported 00:21:30.604 
Non-Operational Permissive Mode: Not Supported 00:21:30.604 NVM Sets: Not Supported 00:21:30.604 Read Recovery Levels: Not Supported 00:21:30.604 Endurance Groups: Not Supported 00:21:30.604 Predictable Latency Mode: Not Supported 00:21:30.604 Traffic Based Keep ALive: Not Supported 00:21:30.604 Namespace Granularity: Not Supported 00:21:30.604 SQ Associations: Not Supported 00:21:30.604 UUID List: Not Supported 00:21:30.604 Multi-Domain Subsystem: Not Supported 00:21:30.604 Fixed Capacity Management: Not Supported 00:21:30.604 Variable Capacity Management: Not Supported 00:21:30.604 Delete Endurance Group: Not Supported 00:21:30.604 Delete NVM Set: Not Supported 00:21:30.604 Extended LBA Formats Supported: Supported 00:21:30.604 Flexible Data Placement Supported: Not Supported 00:21:30.604 00:21:30.604 Controller Memory Buffer Support 00:21:30.604 ================================ 00:21:30.604 Supported: No 00:21:30.604 00:21:30.604 Persistent Memory Region Support 00:21:30.604 ================================ 00:21:30.604 Supported: No 00:21:30.604 00:21:30.604 Admin Command Set Attributes 00:21:30.604 ============================ 00:21:30.604 Security Send/Receive: Not Supported 00:21:30.604 Format NVM: Supported 00:21:30.604 Firmware Activate/Download: Not Supported 00:21:30.604 Namespace Management: Supported 00:21:30.604 Device Self-Test: Not Supported 00:21:30.604 Directives: Supported 00:21:30.604 NVMe-MI: Not Supported 00:21:30.604 Virtualization Management: Not Supported 00:21:30.604 Doorbell Buffer Config: Supported 00:21:30.604 Get LBA Status Capability: Not Supported 00:21:30.604 Command & Feature Lockdown Capability: Not Supported 00:21:30.604 Abort Command Limit: 4 00:21:30.604 Async Event Request Limit: 4 00:21:30.604 Number of Firmware Slots: N/A 00:21:30.604 Firmware Slot 1 Read-Only: N/A 00:21:30.604 Firmware Activation Without Reset: N/A 00:21:30.604 Multiple Update Detection Support: N/A 00:21:30.604 Firmware Update Granularity: No Information Provided 00:21:30.604 Per-Namespace SMART Log: Yes 00:21:30.604 Asymmetric Namespace Access Log Page: Not Supported 00:21:30.604 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:21:30.604 Command Effects Log Page: Supported 00:21:30.604 Get Log Page Extended Data: Supported 00:21:30.604 Telemetry Log Pages: Not Supported 00:21:30.604 Persistent Event Log Pages: Not Supported 00:21:30.604 Supported Log Pages Log Page: May Support 00:21:30.604 Commands Supported & Effects Log Page: Not Supported 00:21:30.604 Feature Identifiers & Effects Log Page:May Support 00:21:30.604 NVMe-MI Commands & Effects Log Page: May Support 00:21:30.604 Data Area 4 for Telemetry Log: Not Supported 00:21:30.604 Error Log Page Entries Supported: 1 00:21:30.605 Keep Alive: Not Supported 00:21:30.605 00:21:30.605 NVM Command Set Attributes 00:21:30.605 ========================== 00:21:30.605 Submission Queue Entry Size 00:21:30.605 Max: 64 00:21:30.605 Min: 64 00:21:30.605 Completion Queue Entry Size 00:21:30.605 Max: 16 00:21:30.605 Min: 16 00:21:30.605 Number of Namespaces: 256 00:21:30.605 Compare Command: Supported 00:21:30.605 Write Uncorrectable Command: Not Supported 00:21:30.605 Dataset Management Command: Supported 00:21:30.605 Write Zeroes Command: Supported 00:21:30.605 Set Features Save Field: Supported 00:21:30.605 Reservations: Not Supported 00:21:30.605 Timestamp: Supported 00:21:30.605 Copy: Supported 00:21:30.605 Volatile Write Cache: Present 00:21:30.605 Atomic Write Unit (Normal): 1 00:21:30.605 Atomic Write Unit (PFail): 1 00:21:30.605 Atomic 
Compare & Write Unit: 1 00:21:30.605 Fused Compare & Write: Not Supported 00:21:30.605 Scatter-Gather List 00:21:30.605 SGL Command Set: Supported 00:21:30.605 SGL Keyed: Not Supported 00:21:30.605 SGL Bit Bucket Descriptor: Not Supported 00:21:30.605 SGL Metadata Pointer: Not Supported 00:21:30.605 Oversized SGL: Not Supported 00:21:30.605 SGL Metadata Address: Not Supported 00:21:30.605 SGL Offset: Not Supported 00:21:30.605 Transport SGL Data Block: Not Supported 00:21:30.605 Replay Protected Memory Block: Not Supported 00:21:30.605 00:21:30.605 Firmware Slot Information 00:21:30.605 ========================= 00:21:30.605 Active slot: 1 00:21:30.605 Slot 1 Firmware Revision: 1.0 00:21:30.605 00:21:30.605 00:21:30.605 Commands Supported and Effects 00:21:30.605 ============================== 00:21:30.605 Admin Commands 00:21:30.605 -------------- 00:21:30.605 Delete I/O Submission Queue (00h): Supported 00:21:30.605 Create I/O Submission Queue (01h): Supported 00:21:30.605 Get Log Page (02h): Supported 00:21:30.605 Delete I/O Completion Queue (04h): Supported 00:21:30.605 Create I/O Completion Queue (05h): Supported 00:21:30.605 Identify (06h): Supported 00:21:30.605 Abort (08h): Supported 00:21:30.605 Set Features (09h): Supported 00:21:30.605 Get Features (0Ah): Supported 00:21:30.605 Asynchronous Event Request (0Ch): Supported 00:21:30.605 Namespace Attachment (15h): Supported NS-Inventory-Change 00:21:30.605 Directive Send (19h): Supported 00:21:30.605 Directive Receive (1Ah): Supported 00:21:30.605 Virtualization Management (1Ch): Supported 00:21:30.605 Doorbell Buffer Config (7Ch): Supported 00:21:30.605 Format NVM (80h): Supported LBA-Change 00:21:30.605 I/O Commands 00:21:30.605 ------------ 00:21:30.605 Flush (00h): Supported LBA-Change 00:21:30.605 Write (01h): Supported LBA-Change 00:21:30.605 Read (02h): Supported 00:21:30.605 Compare (05h): Supported 00:21:30.605 Write Zeroes (08h): Supported LBA-Change 00:21:30.605 Dataset Management (09h): Supported LBA-Change 00:21:30.605 Unknown (0Ch): Supported 00:21:30.605 Unknown (12h): Supported 00:21:30.605 Copy (19h): Supported LBA-Change 00:21:30.605 Unknown (1Dh): Supported LBA-Change 00:21:30.605 00:21:30.605 Error Log 00:21:30.605 ========= 00:21:30.605 00:21:30.605 Arbitration 00:21:30.605 =========== 00:21:30.605 Arbitration Burst: no limit 00:21:30.605 00:21:30.605 Power Management 00:21:30.605 ================ 00:21:30.605 Number of Power States: 1 00:21:30.605 Current Power State: Power State #0 00:21:30.605 Power State #0: 00:21:30.605 Max Power: 25.00 W 00:21:30.605 Non-Operational State: Operational 00:21:30.605 Entry Latency: 16 microseconds 00:21:30.605 Exit Latency: 4 microseconds 00:21:30.605 Relative Read Throughput: 0 00:21:30.605 Relative Read Latency: 0 00:21:30.605 Relative Write Throughput: 0 00:21:30.605 Relative Write Latency: 0 00:21:30.605 Idle Power: Not Reported 00:21:30.605 Active Power: Not Reported 00:21:30.605 Non-Operational Permissive Mode: Not Supported 00:21:30.605 00:21:30.605 Health Information 00:21:30.605 ================== 00:21:30.605 Critical Warnings: 00:21:30.605 Available Spare Space: OK 00:21:30.605 Temperature: OK 00:21:30.605 Device Reliability: OK 00:21:30.605 Read Only: No 00:21:30.605 Volatile Memory Backup: OK 00:21:30.605 Current Temperature: 323 Kelvin (50 Celsius) 00:21:30.605 Temperature Threshold: 343 Kelvin (70 Celsius) 00:21:30.605 Available Spare: 0% 00:21:30.605 Available Spare Threshold: 0% 00:21:30.605 Life Percentage Used: 0% 00:21:30.605 Data Units Read: 733 
00:21:30.605 Data Units Written: 579 00:21:30.605 Host Read Commands: 33359 00:21:30.605 Host Write Commands: 31037 00:21:30.605 Controller Busy Time: 0 minutes 00:21:30.605 Power Cycles: 0 00:21:30.605 Power On Hours: 0 hours 00:21:30.605 Unsafe Shutdowns: 0 00:21:30.605 Unrecoverable Media Errors: 0 00:21:30.605 Lifetime Error Log Entries: 0 00:21:30.605 Warning Temperature Time: 0 minutes 00:21:30.605 Critical Temperature Time: 0 minutes 00:21:30.605 00:21:30.605 Number of Queues 00:21:30.605 ================ 00:21:30.605 Number of I/O Submission Queues: 64 00:21:30.605 Number of I/O Completion Queues: 64 00:21:30.605 00:21:30.605 ZNS Specific Controller Data 00:21:30.605 ============================ 00:21:30.605 Zone Append Size Limit: 0 00:21:30.605 00:21:30.605 00:21:30.605 Active Namespaces 00:21:30.605 ================= 00:21:30.605 Namespace ID:1 00:21:30.605 Error Recovery Timeout: Unlimited 00:21:30.605 Command Set Identifier: [2024-04-17 14:40:39.179058] nvme_ctrlr.c:3484:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 69913 terminated unexpected 00:21:30.605 NVM (00h) 00:21:30.605 Deallocate: Supported 00:21:30.605 Deallocated/Unwritten Error: Supported 00:21:30.605 Deallocated Read Value: All 0x00 00:21:30.605 Deallocate in Write Zeroes: Not Supported 00:21:30.605 Deallocated Guard Field: 0xFFFF 00:21:30.605 Flush: Supported 00:21:30.605 Reservation: Not Supported 00:21:30.605 Namespace Sharing Capabilities: Private 00:21:30.605 Size (in LBAs): 1310720 (5GiB) 00:21:30.605 Capacity (in LBAs): 1310720 (5GiB) 00:21:30.605 Utilization (in LBAs): 1310720 (5GiB) 00:21:30.605 Thin Provisioning: Not Supported 00:21:30.605 Per-NS Atomic Units: No 00:21:30.605 Maximum Single Source Range Length: 128 00:21:30.605 Maximum Copy Length: 128 00:21:30.605 Maximum Source Range Count: 128 00:21:30.605 NGUID/EUI64 Never Reused: No 00:21:30.605 Namespace Write Protected: No 00:21:30.605 Number of LBA Formats: 8 00:21:30.605 Current LBA Format: LBA Format #04 00:21:30.605 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:30.605 LBA Format #01: Data Size: 512 Metadata Size: 8 00:21:30.605 LBA Format #02: Data Size: 512 Metadata Size: 16 00:21:30.605 LBA Format #03: Data Size: 512 Metadata Size: 64 00:21:30.605 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:21:30.605 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:21:30.605 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:21:30.605 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:21:30.605 00:21:30.605 ===================================================== 00:21:30.605 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:21:30.605 ===================================================== 00:21:30.605 Controller Capabilities/Features 00:21:30.605 ================================ 00:21:30.605 Vendor ID: 1b36 00:21:30.605 Subsystem Vendor ID: 1af4 00:21:30.605 Serial Number: 12343 00:21:30.605 Model Number: QEMU NVMe Ctrl 00:21:30.605 Firmware Version: 8.0.0 00:21:30.605 Recommended Arb Burst: 6 00:21:30.605 IEEE OUI Identifier: 00 54 52 00:21:30.605 Multi-path I/O 00:21:30.605 May have multiple subsystem ports: No 00:21:30.605 May have multiple controllers: Yes 00:21:30.605 Associated with SR-IOV VF: No 00:21:30.605 Max Data Transfer Size: 524288 00:21:30.605 Max Number of Namespaces: 256 00:21:30.605 Max Number of I/O Queues: 64 00:21:30.605 NVMe Specification Version (VS): 1.4 00:21:30.605 NVMe Specification Version (Identify): 1.4 00:21:30.605 Maximum Queue Entries: 2048 00:21:30.605 Contiguous Queues Required: Yes 
00:21:30.605 Arbitration Mechanisms Supported 00:21:30.605 Weighted Round Robin: Not Supported 00:21:30.605 Vendor Specific: Not Supported 00:21:30.605 Reset Timeout: 7500 ms 00:21:30.605 Doorbell Stride: 4 bytes 00:21:30.605 NVM Subsystem Reset: Not Supported 00:21:30.605 Command Sets Supported 00:21:30.605 NVM Command Set: Supported 00:21:30.605 Boot Partition: Not Supported 00:21:30.605 Memory Page Size Minimum: 4096 bytes 00:21:30.605 Memory Page Size Maximum: 65536 bytes 00:21:30.605 Persistent Memory Region: Not Supported 00:21:30.605 Optional Asynchronous Events Supported 00:21:30.605 Namespace Attribute Notices: Supported 00:21:30.605 Firmware Activation Notices: Not Supported 00:21:30.605 ANA Change Notices: Not Supported 00:21:30.605 PLE Aggregate Log Change Notices: Not Supported 00:21:30.605 LBA Status Info Alert Notices: Not Supported 00:21:30.606 EGE Aggregate Log Change Notices: Not Supported 00:21:30.606 Normal NVM Subsystem Shutdown event: Not Supported 00:21:30.606 Zone Descriptor Change Notices: Not Supported 00:21:30.606 Discovery Log Change Notices: Not Supported 00:21:30.606 Controller Attributes 00:21:30.606 128-bit Host Identifier: Not Supported 00:21:30.606 Non-Operational Permissive Mode: Not Supported 00:21:30.606 NVM Sets: Not Supported 00:21:30.606 Read Recovery Levels: Not Supported 00:21:30.606 Endurance Groups: Supported 00:21:30.606 Predictable Latency Mode: Not Supported 00:21:30.606 Traffic Based Keep ALive: Not Supported 00:21:30.606 Namespace Granularity: Not Supported 00:21:30.606 SQ Associations: Not Supported 00:21:30.606 UUID List: Not Supported 00:21:30.606 Multi-Domain Subsystem: Not Supported 00:21:30.606 Fixed Capacity Management: Not Supported 00:21:30.606 Variable Capacity Management: Not Supported 00:21:30.606 Delete Endurance Group: Not Supported 00:21:30.606 Delete NVM Set: Not Supported 00:21:30.606 Extended LBA Formats Supported: Supported 00:21:30.606 Flexible Data Placement Supported: Supported 00:21:30.606 00:21:30.606 Controller Memory Buffer Support 00:21:30.606 ================================ 00:21:30.606 Supported: No 00:21:30.606 00:21:30.606 Persistent Memory Region Support 00:21:30.606 ================================ 00:21:30.606 Supported: No 00:21:30.606 00:21:30.606 Admin Command Set Attributes 00:21:30.606 ============================ 00:21:30.606 Security Send/Receive: Not Supported 00:21:30.606 Format NVM: Supported 00:21:30.606 Firmware Activate/Download: Not Supported 00:21:30.606 Namespace Management: Supported 00:21:30.606 Device Self-Test: Not Supported 00:21:30.606 Directives: Supported 00:21:30.606 NVMe-MI: Not Supported 00:21:30.606 Virtualization Management: Not Supported 00:21:30.606 Doorbell Buffer Config: Supported 00:21:30.606 Get LBA Status Capability: Not Supported 00:21:30.606 Command & Feature Lockdown Capability: Not Supported 00:21:30.606 Abort Command Limit: 4 00:21:30.606 Async Event Request Limit: 4 00:21:30.606 Number of Firmware Slots: N/A 00:21:30.606 Firmware Slot 1 Read-Only: N/A 00:21:30.606 Firmware Activation Without Reset: N/A 00:21:30.606 Multiple Update Detection Support: N/A 00:21:30.606 Firmware Update Granularity: No Information Provided 00:21:30.606 Per-Namespace SMART Log: Yes 00:21:30.606 Asymmetric Namespace Access Log Page: Not Supported 00:21:30.606 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:21:30.606 Command Effects Log Page: Supported 00:21:30.606 Get Log Page Extended Data: Supported 00:21:30.606 Telemetry Log Pages: Not Supported 00:21:30.606 Persistent Event Log 
Pages: Not Supported 00:21:30.606 Supported Log Pages Log Page: May Support 00:21:30.606 Commands Supported & Effects Log Page: Not Supported 00:21:30.606 Feature Identifiers & Effects Log Page:May Support 00:21:30.606 NVMe-MI Commands & Effects Log Page: May Support 00:21:30.606 Data Area 4 for Telemetry Log: Not Supported 00:21:30.606 Error Log Page Entries Supported: 1 00:21:30.606 Keep Alive: Not Supported 00:21:30.606 00:21:30.606 NVM Command Set Attributes 00:21:30.606 ========================== 00:21:30.606 Submission Queue Entry Size 00:21:30.606 Max: 64 00:21:30.606 Min: 64 00:21:30.606 Completion Queue Entry Size 00:21:30.606 Max: 16 00:21:30.606 Min: 16 00:21:30.606 Number of Namespaces: 256 00:21:30.606 Compare Command: Supported 00:21:30.606 Write Uncorrectable Command: Not Supported 00:21:30.606 Dataset Management Command: Supported 00:21:30.606 Write Zeroes Command: Supported 00:21:30.606 Set Features Save Field: Supported 00:21:30.606 Reservations: Not Supported 00:21:30.606 Timestamp: Supported 00:21:30.606 Copy: Supported 00:21:30.606 Volatile Write Cache: Present 00:21:30.606 Atomic Write Unit (Normal): 1 00:21:30.606 Atomic Write Unit (PFail): 1 00:21:30.606 Atomic Compare & Write Unit: 1 00:21:30.606 Fused Compare & Write: Not Supported 00:21:30.606 Scatter-Gather List 00:21:30.606 SGL Command Set: Supported 00:21:30.606 SGL Keyed: Not Supported 00:21:30.606 SGL Bit Bucket Descriptor: Not Supported 00:21:30.606 SGL Metadata Pointer: Not Supported 00:21:30.606 Oversized SGL: Not Supported 00:21:30.606 SGL Metadata Address: Not Supported 00:21:30.606 SGL Offset: Not Supported 00:21:30.606 Transport SGL Data Block: Not Supported 00:21:30.606 Replay Protected Memory Block: Not Supported 00:21:30.606 00:21:30.606 Firmware Slot Information 00:21:30.606 ========================= 00:21:30.606 Active slot: 1 00:21:30.606 Slot 1 Firmware Revision: 1.0 00:21:30.606 00:21:30.606 00:21:30.606 Commands Supported and Effects 00:21:30.606 ============================== 00:21:30.606 Admin Commands 00:21:30.606 -------------- 00:21:30.606 Delete I/O Submission Queue (00h): Supported 00:21:30.606 Create I/O Submission Queue (01h): Supported 00:21:30.606 Get Log Page (02h): Supported 00:21:30.606 Delete I/O Completion Queue (04h): Supported 00:21:30.606 Create I/O Completion Queue (05h): Supported 00:21:30.606 Identify (06h): Supported 00:21:30.606 Abort (08h): Supported 00:21:30.606 Set Features (09h): Supported 00:21:30.606 Get Features (0Ah): Supported 00:21:30.606 Asynchronous Event Request (0Ch): Supported 00:21:30.606 Namespace Attachment (15h): Supported NS-Inventory-Change 00:21:30.606 Directive Send (19h): Supported 00:21:30.606 Directive Receive (1Ah): Supported 00:21:30.606 Virtualization Management (1Ch): Supported 00:21:30.606 Doorbell Buffer Config (7Ch): Supported 00:21:30.606 Format NVM (80h): Supported LBA-Change 00:21:30.606 I/O Commands 00:21:30.606 ------------ 00:21:30.606 Flush (00h): Supported LBA-Change 00:21:30.606 Write (01h): Supported LBA-Change 00:21:30.606 Read (02h): Supported 00:21:30.606 Compare (05h): Supported 00:21:30.606 Write Zeroes (08h): Supported LBA-Change 00:21:30.606 Dataset Management (09h): Supported LBA-Change 00:21:30.606 Unknown (0Ch): Supported 00:21:30.606 Unknown (12h): Supported 00:21:30.606 Copy (19h): Supported LBA-Change 00:21:30.606 Unknown (1Dh): Supported LBA-Change 00:21:30.606 00:21:30.606 Error Log 00:21:30.606 ========= 00:21:30.606 00:21:30.606 Arbitration 00:21:30.606 =========== 00:21:30.606 Arbitration Burst: no limit 
00:21:30.606 00:21:30.606 Power Management 00:21:30.606 ================ 00:21:30.606 Number of Power States: 1 00:21:30.606 Current Power State: Power State #0 00:21:30.606 Power State #0: 00:21:30.606 Max Power: 25.00 W 00:21:30.606 Non-Operational State: Operational 00:21:30.606 Entry Latency: 16 microseconds 00:21:30.606 Exit Latency: 4 microseconds 00:21:30.606 Relative Read Throughput: 0 00:21:30.606 Relative Read Latency: 0 00:21:30.606 Relative Write Throughput: 0 00:21:30.606 Relative Write Latency: 0 00:21:30.606 Idle Power: Not Reported 00:21:30.606 Active Power: Not Reported 00:21:30.606 Non-Operational Permissive Mode: Not Supported 00:21:30.606 00:21:30.606 Health Information 00:21:30.606 ================== 00:21:30.606 Critical Warnings: 00:21:30.606 Available Spare Space: OK 00:21:30.606 Temperature: OK 00:21:30.606 Device Reliability: OK 00:21:30.606 Read Only: No 00:21:30.606 Volatile Memory Backup: OK 00:21:30.606 Current Temperature: 323 Kelvin (50 Celsius) 00:21:30.606 Temperature Threshold: 343 Kelvin (70 Celsius) 00:21:30.606 Available Spare: 0% 00:21:30.606 Available Spare Threshold: 0% 00:21:30.606 Life Percentage Used: 0% 00:21:30.606 Data Units Read: 782 00:21:30.606 Data Units Written: 718 00:21:30.606 Host Read Commands: 32911 00:21:30.606 Host Write Commands: 32481 00:21:30.606 Controller Busy Time: 0 minutes 00:21:30.606 Power Cycles: 0 00:21:30.606 Power On Hours: 0 hours 00:21:30.606 Unsafe Shutdowns: 0 00:21:30.606 Unrecoverable Media Errors: 0 00:21:30.606 Lifetime Error Log Entries: 0 00:21:30.606 Warning Temperature Time: 0 minutes 00:21:30.606 Critical Temperature Time: 0 minutes 00:21:30.606 00:21:30.606 Number of Queues 00:21:30.606 ================ 00:21:30.606 Number of I/O Submission Queues: 64 00:21:30.606 Number of I/O Completion Queues: 64 00:21:30.606 00:21:30.606 ZNS Specific Controller Data 00:21:30.606 ============================ 00:21:30.606 Zone Append Size Limit: 0 00:21:30.606 00:21:30.606 00:21:30.606 Active Namespaces 00:21:30.606 ================= 00:21:30.606 Namespace ID:1 00:21:30.606 Error Recovery Timeout: Unlimited 00:21:30.606 Command Set Identifier: NVM (00h) 00:21:30.606 Deallocate: Supported 00:21:30.606 Deallocated/Unwritten Error: Supported 00:21:30.606 Deallocated Read Value: All 0x00 00:21:30.606 Deallocate in Write Zeroes: Not Supported 00:21:30.606 Deallocated Guard Field: 0xFFFF 00:21:30.607 Flush: Supported 00:21:30.607 Reservation: Not Supported 00:21:30.607 Namespace Sharing Capabilities: Multiple Controllers 00:21:30.607 Size (in LBAs): 262144 (1GiB) 00:21:30.607 Capacity (in LBAs): 262144 (1GiB) 00:21:30.607 Utilization (in LBAs): 262144 (1GiB) 00:21:30.607 Thin Provisioning: Not Supported 00:21:30.607 Per-NS Atomic Units: No 00:21:30.607 Maximum Single Source Range Length: 128 00:21:30.607 Maximum Copy Length: 128 00:21:30.607 Maximum Source Range Count: 128 00:21:30.607 NGUID/EUI64 Never Reused: No 00:21:30.607 Namespace Write Protected: No 00:21:30.607 Endurance group ID: 1 00:21:30.607 Number of LBA Formats: 8 00:21:30.607 Current LBA Format: LBA Format #04 00:21:30.607 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:30.607 LBA Format #01: Data Size: 512 Metadata Size: 8 00:21:30.607 LBA Format #02: Data Size: 512 Metadata Size: 16 00:21:30.607 LBA Format #03: Data Size: 512 Metadata Size: 64 00:21:30.607 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:21:30.607 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:21:30.607 LBA Format #06: Data Size[2024-04-17 14:40:39.181091] 
nvme_ctrlr.c:3484:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 69913 terminated unexpected 00:21:30.607 : 4096 Metadata Size: 16 00:21:30.607 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:21:30.607 00:21:30.607 Get Feature FDP: 00:21:30.607 ================ 00:21:30.607 Enabled: Yes 00:21:30.607 FDP configuration index: 0 00:21:30.607 00:21:30.607 FDP configurations log page 00:21:30.607 =========================== 00:21:30.607 Number of FDP configurations: 1 00:21:30.607 Version: 0 00:21:30.607 Size: 112 00:21:30.607 FDP Configuration Descriptor: 0 00:21:30.607 Descriptor Size: 96 00:21:30.607 Reclaim Group Identifier format: 2 00:21:30.607 FDP Volatile Write Cache: Not Present 00:21:30.607 FDP Configuration: Valid 00:21:30.607 Vendor Specific Size: 0 00:21:30.607 Number of Reclaim Groups: 2 00:21:30.607 Number of Reclaim Unit Handles: 8 00:21:30.607 Max Placement Identifiers: 128 00:21:30.607 Number of Namespaces Supported: 256 00:21:30.607 Reclaim unit Nominal Size: 6000000 bytes 00:21:30.607 Estimated Reclaim Unit Time Limit: Not Reported 00:21:30.607 RUH Desc #000: RUH Type: Initially Isolated 00:21:30.607 RUH Desc #001: RUH Type: Initially Isolated 00:21:30.607 RUH Desc #002: RUH Type: Initially Isolated 00:21:30.607 RUH Desc #003: RUH Type: Initially Isolated 00:21:30.607 RUH Desc #004: RUH Type: Initially Isolated 00:21:30.607 RUH Desc #005: RUH Type: Initially Isolated 00:21:30.607 RUH Desc #006: RUH Type: Initially Isolated 00:21:30.607 RUH Desc #007: RUH Type: Initially Isolated 00:21:30.607 00:21:30.607 FDP reclaim unit handle usage log page 00:21:30.607 ====================================== 00:21:30.607 Number of Reclaim Unit Handles: 8 00:21:30.607 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:21:30.607 RUH Usage Desc #001: RUH Attributes: Unused 00:21:30.607 RUH Usage Desc #002: RUH Attributes: Unused 00:21:30.607 RUH Usage Desc #003: RUH Attributes: Unused 00:21:30.607 RUH Usage Desc #004: RUH Attributes: Unused 00:21:30.607 RUH Usage Desc #005: RUH Attributes: Unused 00:21:30.607 RUH Usage Desc #006: RUH Attributes: Unused 00:21:30.607 RUH Usage Desc #007: RUH Attributes: Unused 00:21:30.607 00:21:30.607 FDP statistics log page 00:21:30.607 ======================= 00:21:30.607 Host bytes with metadata written: 445030400 00:21:30.607 Media bytes with metadata written: 445095936 00:21:30.607 Media bytes erased: 0 00:21:30.607 00:21:30.607 FDP events log page 00:21:30.607 =================== 00:21:30.607 Number of FDP events: 0 00:21:30.607 00:21:30.607 ===================================================== 00:21:30.607 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:21:30.607 ===================================================== 00:21:30.607 Controller Capabilities/Features 00:21:30.607 ================================ 00:21:30.607 Vendor ID: 1b36 00:21:30.607 Subsystem Vendor ID: 1af4 00:21:30.607 Serial Number: 12342 00:21:30.607 Model Number: QEMU NVMe Ctrl 00:21:30.607 Firmware Version: 8.0.0 00:21:30.607 Recommended Arb Burst: 6 00:21:30.607 IEEE OUI Identifier: 00 54 52 00:21:30.607 Multi-path I/O 00:21:30.607 May have multiple subsystem ports: No 00:21:30.607 May have multiple controllers: No 00:21:30.607 Associated with SR-IOV VF: No 00:21:30.607 Max Data Transfer Size: 524288 00:21:30.607 Max Number of Namespaces: 256 00:21:30.607 Max Number of I/O Queues: 64 00:21:30.607 NVMe Specification Version (VS): 1.4 00:21:30.607 NVMe Specification Version (Identify): 1.4 00:21:30.607 Maximum Queue Entries: 2048
Contiguous Queues Required: Yes 00:21:30.607 Arbitration Mechanisms Supported 00:21:30.607 Weighted Round Robin: Not Supported 00:21:30.607 Vendor Specific: Not Supported 00:21:30.607 Reset Timeout: 7500 ms 00:21:30.607 Doorbell Stride: 4 bytes 00:21:30.607 NVM Subsystem Reset: Not Supported 00:21:30.607 Command Sets Supported 00:21:30.607 NVM Command Set: Supported 00:21:30.607 Boot Partition: Not Supported 00:21:30.607 Memory Page Size Minimum: 4096 bytes 00:21:30.607 Memory Page Size Maximum: 65536 bytes 00:21:30.607 Persistent Memory Region: Not Supported 00:21:30.607 Optional Asynchronous Events Supported 00:21:30.607 Namespace Attribute Notices: Supported 00:21:30.607 Firmware Activation Notices: Not Supported 00:21:30.607 ANA Change Notices: Not Supported 00:21:30.607 PLE Aggregate Log Change Notices: Not Supported 00:21:30.607 LBA Status Info Alert Notices: Not Supported 00:21:30.607 EGE Aggregate Log Change Notices: Not Supported 00:21:30.607 Normal NVM Subsystem Shutdown event: Not Supported 00:21:30.607 Zone Descriptor Change Notices: Not Supported 00:21:30.607 Discovery Log Change Notices: Not Supported 00:21:30.607 Controller Attributes 00:21:30.607 128-bit Host Identifier: Not Supported 00:21:30.607 Non-Operational Permissive Mode: Not Supported 00:21:30.607 NVM Sets: Not Supported 00:21:30.607 Read Recovery Levels: Not Supported 00:21:30.607 Endurance Groups: Not Supported 00:21:30.607 Predictable Latency Mode: Not Supported 00:21:30.607 Traffic Based Keep ALive: Not Supported 00:21:30.607 Namespace Granularity: Not Supported 00:21:30.607 SQ Associations: Not Supported 00:21:30.607 UUID List: Not Supported 00:21:30.607 Multi-Domain Subsystem: Not Supported 00:21:30.607 Fixed Capacity Management: Not Supported 00:21:30.607 Variable Capacity Management: Not Supported 00:21:30.607 Delete Endurance Group: Not Supported 00:21:30.607 Delete NVM Set: Not Supported 00:21:30.607 Extended LBA Formats Supported: Supported 00:21:30.607 Flexible Data Placement Supported: Not Supported 00:21:30.607 00:21:30.607 Controller Memory Buffer Support 00:21:30.607 ================================ 00:21:30.607 Supported: No 00:21:30.607 00:21:30.607 Persistent Memory Region Support 00:21:30.607 ================================ 00:21:30.607 Supported: No 00:21:30.607 00:21:30.607 Admin Command Set Attributes 00:21:30.607 ============================ 00:21:30.607 Security Send/Receive: Not Supported 00:21:30.607 Format NVM: Supported 00:21:30.607 Firmware Activate/Download: Not Supported 00:21:30.607 Namespace Management: Supported 00:21:30.607 Device Self-Test: Not Supported 00:21:30.607 Directives: Supported 00:21:30.607 NVMe-MI: Not Supported 00:21:30.607 Virtualization Management: Not Supported 00:21:30.607 Doorbell Buffer Config: Supported 00:21:30.607 Get LBA Status Capability: Not Supported 00:21:30.607 Command & Feature Lockdown Capability: Not Supported 00:21:30.607 Abort Command Limit: 4 00:21:30.607 Async Event Request Limit: 4 00:21:30.607 Number of Firmware Slots: N/A 00:21:30.607 Firmware Slot 1 Read-Only: N/A 00:21:30.608 Firmware Activation Without Reset: N/A 00:21:30.608 Multiple Update Detection Support: N/A 00:21:30.608 Firmware Update Granularity: No Information Provided 00:21:30.608 Per-Namespace SMART Log: Yes 00:21:30.608 Asymmetric Namespace Access Log Page: Not Supported 00:21:30.608 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:21:30.608 Command Effects Log Page: Supported 00:21:30.608 Get Log Page Extended Data: Supported 00:21:30.608 Telemetry Log Pages: Not Supported 
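Dumps like the one above run to a few hundred fields per controller; when only a handful matter, filtering a saved copy is quicker than scrolling. A small sketch, assuming the output above were captured to a hypothetical file identify-12.0.txt:

  # pull a few capability fields out of a saved identify dump
  grep -E 'Serial Number|Max Data Transfer Size|Maximum Queue Entries|Doorbell Stride' identify-12.0.txt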
00:21:30.608 Persistent Event Log Pages: Not Supported 00:21:30.608 Supported Log Pages Log Page: May Support 00:21:30.608 Commands Supported & Effects Log Page: Not Supported 00:21:30.608 Feature Identifiers & Effects Log Page:May Support 00:21:30.608 NVMe-MI Commands & Effects Log Page: May Support 00:21:30.608 Data Area 4 for Telemetry Log: Not Supported 00:21:30.608 Error Log Page Entries Supported: 1 00:21:30.608 Keep Alive: Not Supported 00:21:30.608 00:21:30.608 NVM Command Set Attributes 00:21:30.608 ========================== 00:21:30.608 Submission Queue Entry Size 00:21:30.608 Max: 64 00:21:30.608 Min: 64 00:21:30.608 Completion Queue Entry Size 00:21:30.608 Max: 16 00:21:30.608 Min: 16 00:21:30.608 Number of Namespaces: 256 00:21:30.608 Compare Command: Supported 00:21:30.608 Write Uncorrectable Command: Not Supported 00:21:30.608 Dataset Management Command: Supported 00:21:30.608 Write Zeroes Command: Supported 00:21:30.608 Set Features Save Field: Supported 00:21:30.608 Reservations: Not Supported 00:21:30.608 Timestamp: Supported 00:21:30.608 Copy: Supported 00:21:30.608 Volatile Write Cache: Present 00:21:30.608 Atomic Write Unit (Normal): 1 00:21:30.608 Atomic Write Unit (PFail): 1 00:21:30.608 Atomic Compare & Write Unit: 1 00:21:30.608 Fused Compare & Write: Not Supported 00:21:30.608 Scatter-Gather List 00:21:30.608 SGL Command Set: Supported 00:21:30.608 SGL Keyed: Not Supported 00:21:30.608 SGL Bit Bucket Descriptor: Not Supported 00:21:30.608 SGL Metadata Pointer: Not Supported 00:21:30.608 Oversized SGL: Not Supported 00:21:30.608 SGL Metadata Address: Not Supported 00:21:30.608 SGL Offset: Not Supported 00:21:30.608 Transport SGL Data Block: Not Supported 00:21:30.608 Replay Protected Memory Block: Not Supported 00:21:30.608 00:21:30.608 Firmware Slot Information 00:21:30.608 ========================= 00:21:30.608 Active slot: 1 00:21:30.608 Slot 1 Firmware Revision: 1.0 00:21:30.608 00:21:30.608 00:21:30.608 Commands Supported and Effects 00:21:30.608 ============================== 00:21:30.608 Admin Commands 00:21:30.608 -------------- 00:21:30.608 Delete I/O Submission Queue (00h): Supported 00:21:30.608 Create I/O Submission Queue (01h): Supported 00:21:30.608 Get Log Page (02h): Supported 00:21:30.608 Delete I/O Completion Queue (04h): Supported 00:21:30.608 Create I/O Completion Queue (05h): Supported 00:21:30.608 Identify (06h): Supported 00:21:30.608 Abort (08h): Supported 00:21:30.608 Set Features (09h): Supported 00:21:30.608 Get Features (0Ah): Supported 00:21:30.608 Asynchronous Event Request (0Ch): Supported 00:21:30.608 Namespace Attachment (15h): Supported NS-Inventory-Change 00:21:30.608 Directive Send (19h): Supported 00:21:30.608 Directive Receive (1Ah): Supported 00:21:30.608 Virtualization Management (1Ch): Supported 00:21:30.608 Doorbell Buffer Config (7Ch): Supported 00:21:30.608 Format NVM (80h): Supported LBA-Change 00:21:30.608 I/O Commands 00:21:30.608 ------------ 00:21:30.608 Flush (00h): Supported LBA-Change 00:21:30.608 Write (01h): Supported LBA-Change 00:21:30.608 Read (02h): Supported 00:21:30.608 Compare (05h): Supported 00:21:30.608 Write Zeroes (08h): Supported LBA-Change 00:21:30.608 Dataset Management (09h): Supported LBA-Change 00:21:30.608 Unknown (0Ch): Supported 00:21:30.608 Unknown (12h): Supported 00:21:30.608 Copy (19h): Supported LBA-Change 00:21:30.608 Unknown (1Dh): Supported LBA-Change 00:21:30.608 00:21:30.608 Error Log 00:21:30.608 ========= 00:21:30.608 00:21:30.608 Arbitration 00:21:30.608 =========== 
00:21:30.608 Arbitration Burst: no limit 00:21:30.608 00:21:30.608 Power Management 00:21:30.608 ================ 00:21:30.608 Number of Power States: 1 00:21:30.608 Current Power State: Power State #0 00:21:30.608 Power State #0: 00:21:30.608 Max Power: 25.00 W 00:21:30.608 Non-Operational State: Operational 00:21:30.608 Entry Latency: 16 microseconds 00:21:30.608 Exit Latency: 4 microseconds 00:21:30.608 Relative Read Throughput: 0 00:21:30.608 Relative Read Latency: 0 00:21:30.608 Relative Write Throughput: 0 00:21:30.608 Relative Write Latency: 0 00:21:30.608 Idle Power: Not Reported 00:21:30.608 Active Power: Not Reported 00:21:30.608 Non-Operational Permissive Mode: Not Supported 00:21:30.608 00:21:30.608 Health Information 00:21:30.608 ================== 00:21:30.608 Critical Warnings: 00:21:30.608 Available Spare Space: OK 00:21:30.608 Temperature: OK 00:21:30.608 Device Reliability: OK 00:21:30.608 Read Only: No 00:21:30.608 Volatile Memory Backup: OK 00:21:30.608 Current Temperature: 323 Kelvin (50 Celsius) 00:21:30.608 Temperature Threshold: 343 Kelvin (70 Celsius) 00:21:30.608 Available Spare: 0% 00:21:30.608 Available Spare Threshold: 0% 00:21:30.608 Life Percentage Used: 0% 00:21:30.608 Data Units Read: 2259 00:21:30.608 Data Units Written: 1940 00:21:30.608 Host Read Commands: 99654 00:21:30.608 Host Write Commands: 95424 00:21:30.608 Controller Busy Time: 0 minutes 00:21:30.608 Power Cycles: 0 00:21:30.608 Power On Hours: 0 hours 00:21:30.608 Unsafe Shutdowns: 0 00:21:30.608 Unrecoverable Media Errors: 0 00:21:30.608 Lifetime Error Log Entries: 0 00:21:30.608 Warning Temperature Time: 0 minutes 00:21:30.608 Critical Temperature Time: 0 minutes 00:21:30.608 00:21:30.608 Number of Queues 00:21:30.608 ================ 00:21:30.608 Number of I/O Submission Queues: 64 00:21:30.608 Number of I/O Completion Queues: 64 00:21:30.608 00:21:30.608 ZNS Specific Controller Data 00:21:30.608 ============================ 00:21:30.608 Zone Append Size Limit: 0 00:21:30.608 00:21:30.608 00:21:30.608 Active Namespaces 00:21:30.608 ================= 00:21:30.608 Namespace ID:1 00:21:30.608 Error Recovery Timeout: Unlimited 00:21:30.608 Command Set Identifier: NVM (00h) 00:21:30.608 Deallocate: Supported 00:21:30.608 Deallocated/Unwritten Error: Supported 00:21:30.608 Deallocated Read Value: All 0x00 00:21:30.608 Deallocate in Write Zeroes: Not Supported 00:21:30.608 Deallocated Guard Field: 0xFFFF 00:21:30.608 Flush: Supported 00:21:30.608 Reservation: Not Supported 00:21:30.608 Namespace Sharing Capabilities: Private 00:21:30.608 Size (in LBAs): 1048576 (4GiB) 00:21:30.608 Capacity (in LBAs): 1048576 (4GiB) 00:21:30.608 Utilization (in LBAs): 1048576 (4GiB) 00:21:30.608 Thin Provisioning: Not Supported 00:21:30.608 Per-NS Atomic Units: No 00:21:30.867 Maximum Single Source Range Length: 128 00:21:30.867 Maximum Copy Length: 128 00:21:30.867 Maximum Source Range Count: 128 00:21:30.867 NGUID/EUI64 Never Reused: No 00:21:30.867 Namespace Write Protected: No 00:21:30.867 Number of LBA Formats: 8 00:21:30.867 Current LBA Format: LBA Format #04 00:21:30.867 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:30.867 LBA Format #01: Data Size: 512 Metadata Size: 8 00:21:30.867 LBA Format #02: Data Size: 512 Metadata Size: 16 00:21:30.867 LBA Format #03: Data Size: 512 Metadata Size: 64 00:21:30.867 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:21:30.867 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:21:30.867 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:21:30.867 LBA Format 
#07: Data Size: 4096 Metadata Size: 64 00:21:30.867 00:21:30.867 Namespace ID:2 00:21:30.867 Error Recovery Timeout: Unlimited 00:21:30.867 Command Set Identifier: NVM (00h) 00:21:30.867 Deallocate: Supported 00:21:30.867 Deallocated/Unwritten Error: Supported 00:21:30.867 Deallocated Read Value: All 0x00 00:21:30.867 Deallocate in Write Zeroes: Not Supported 00:21:30.867 Deallocated Guard Field: 0xFFFF 00:21:30.867 Flush: Supported 00:21:30.867 Reservation: Not Supported 00:21:30.867 Namespace Sharing Capabilities: Private 00:21:30.867 Size (in LBAs): 1048576 (4GiB) 00:21:30.867 Capacity (in LBAs): 1048576 (4GiB) 00:21:30.867 Utilization (in LBAs): 1048576 (4GiB) 00:21:30.867 Thin Provisioning: Not Supported 00:21:30.867 Per-NS Atomic Units: No 00:21:30.867 Maximum Single Source Range Length: 128 00:21:30.867 Maximum Copy Length: 128 00:21:30.867 Maximum Source Range Count: 128 00:21:30.867 NGUID/EUI64 Never Reused: No 00:21:30.867 Namespace Write Protected: No 00:21:30.867 Number of LBA Formats: 8 00:21:30.867 Current LBA Format: LBA Format #04 00:21:30.867 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:30.867 LBA Format #01: Data Size: 512 Metadata Size: 8 00:21:30.867 LBA Format #02: Data Size: 512 Metadata Size: 16 00:21:30.867 LBA Format #03: Data Size: 512 Metadata Size: 64 00:21:30.867 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:21:30.867 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:21:30.867 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:21:30.867 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:21:30.867 00:21:30.867 Namespace ID:3 00:21:30.867 Error Recovery Timeout: Unlimited 00:21:30.867 Command Set Identifier: NVM (00h) 00:21:30.867 Deallocate: Supported 00:21:30.867 Deallocated/Unwritten Error: Supported 00:21:30.867 Deallocated Read Value: All 0x00 00:21:30.867 Deallocate in Write Zeroes: Not Supported 00:21:30.867 Deallocated Guard Field: 0xFFFF 00:21:30.867 Flush: Supported 00:21:30.867 Reservation: Not Supported 00:21:30.867 Namespace Sharing Capabilities: Private 00:21:30.867 Size (in LBAs): 1048576 (4GiB) 00:21:30.867 Capacity (in LBAs): 1048576 (4GiB) 00:21:30.867 Utilization (in LBAs): 1048576 (4GiB) 00:21:30.867 Thin Provisioning: Not Supported 00:21:30.867 Per-NS Atomic Units: No 00:21:30.867 Maximum Single Source Range Length: 128 00:21:30.867 Maximum Copy Length: 128 00:21:30.867 Maximum Source Range Count: 128 00:21:30.867 NGUID/EUI64 Never Reused: No 00:21:30.867 Namespace Write Protected: No 00:21:30.868 Number of LBA Formats: 8 00:21:30.868 Current LBA Format: LBA Format #04 00:21:30.868 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:30.868 LBA Format #01: Data Size: 512 Metadata Size: 8 00:21:30.868 LBA Format #02: Data Size: 512 Metadata Size: 16 00:21:30.868 LBA Format #03: Data Size: 512 Metadata Size: 64 00:21:30.868 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:21:30.868 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:21:30.868 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:21:30.868 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:21:30.868 00:21:30.868 14:40:39 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:21:30.868 14:40:39 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:21:31.127 ===================================================== 00:21:31.127 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:21:31.127 ===================================================== 00:21:31.127 Controller Capabilities/Features 
00:21:31.127 ================================ 00:21:31.127 Vendor ID: 1b36 00:21:31.127 Subsystem Vendor ID: 1af4 00:21:31.127 Serial Number: 12340 00:21:31.127 Model Number: QEMU NVMe Ctrl 00:21:31.127 Firmware Version: 8.0.0 00:21:31.127 Recommended Arb Burst: 6 00:21:31.127 IEEE OUI Identifier: 00 54 52 00:21:31.127 Multi-path I/O 00:21:31.127 May have multiple subsystem ports: No 00:21:31.127 May have multiple controllers: No 00:21:31.127 Associated with SR-IOV VF: No 00:21:31.127 Max Data Transfer Size: 524288 00:21:31.127 Max Number of Namespaces: 256 00:21:31.127 Max Number of I/O Queues: 64 00:21:31.127 NVMe Specification Version (VS): 1.4 00:21:31.127 NVMe Specification Version (Identify): 1.4 00:21:31.127 Maximum Queue Entries: 2048 00:21:31.127 Contiguous Queues Required: Yes 00:21:31.127 Arbitration Mechanisms Supported 00:21:31.127 Weighted Round Robin: Not Supported 00:21:31.127 Vendor Specific: Not Supported 00:21:31.127 Reset Timeout: 7500 ms 00:21:31.127 Doorbell Stride: 4 bytes 00:21:31.127 NVM Subsystem Reset: Not Supported 00:21:31.127 Command Sets Supported 00:21:31.127 NVM Command Set: Supported 00:21:31.127 Boot Partition: Not Supported 00:21:31.127 Memory Page Size Minimum: 4096 bytes 00:21:31.127 Memory Page Size Maximum: 65536 bytes 00:21:31.127 Persistent Memory Region: Not Supported 00:21:31.127 Optional Asynchronous Events Supported 00:21:31.127 Namespace Attribute Notices: Supported 00:21:31.127 Firmware Activation Notices: Not Supported 00:21:31.127 ANA Change Notices: Not Supported 00:21:31.127 PLE Aggregate Log Change Notices: Not Supported 00:21:31.127 LBA Status Info Alert Notices: Not Supported 00:21:31.127 EGE Aggregate Log Change Notices: Not Supported 00:21:31.127 Normal NVM Subsystem Shutdown event: Not Supported 00:21:31.127 Zone Descriptor Change Notices: Not Supported 00:21:31.127 Discovery Log Change Notices: Not Supported 00:21:31.127 Controller Attributes 00:21:31.127 128-bit Host Identifier: Not Supported 00:21:31.127 Non-Operational Permissive Mode: Not Supported 00:21:31.127 NVM Sets: Not Supported 00:21:31.127 Read Recovery Levels: Not Supported 00:21:31.127 Endurance Groups: Not Supported 00:21:31.127 Predictable Latency Mode: Not Supported 00:21:31.127 Traffic Based Keep ALive: Not Supported 00:21:31.127 Namespace Granularity: Not Supported 00:21:31.127 SQ Associations: Not Supported 00:21:31.127 UUID List: Not Supported 00:21:31.127 Multi-Domain Subsystem: Not Supported 00:21:31.127 Fixed Capacity Management: Not Supported 00:21:31.127 Variable Capacity Management: Not Supported 00:21:31.127 Delete Endurance Group: Not Supported 00:21:31.127 Delete NVM Set: Not Supported 00:21:31.127 Extended LBA Formats Supported: Supported 00:21:31.127 Flexible Data Placement Supported: Not Supported 00:21:31.127 00:21:31.127 Controller Memory Buffer Support 00:21:31.127 ================================ 00:21:31.127 Supported: No 00:21:31.127 00:21:31.127 Persistent Memory Region Support 00:21:31.127 ================================ 00:21:31.127 Supported: No 00:21:31.127 00:21:31.127 Admin Command Set Attributes 00:21:31.127 ============================ 00:21:31.127 Security Send/Receive: Not Supported 00:21:31.127 Format NVM: Supported 00:21:31.127 Firmware Activate/Download: Not Supported 00:21:31.127 Namespace Management: Supported 00:21:31.127 Device Self-Test: Not Supported 00:21:31.127 Directives: Supported 00:21:31.127 NVMe-MI: Not Supported 00:21:31.127 Virtualization Management: Not Supported 00:21:31.127 Doorbell Buffer Config: Supported 
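The nvme.sh trace lines above show spdk_nvme_identify being invoked once per bound PCIe device from a `for bdf` loop. Run by hand, an equivalent loop over the four controllers in this log would look roughly like this (binary path and -r/-i arguments taken from the trace; BDFs would differ on another setup):

  identify=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
  for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
      "$identify" -r "trtype:PCIe traddr:$bdf" -i 0
  done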
00:21:31.127 Get LBA Status Capability: Not Supported 00:21:31.127 Command & Feature Lockdown Capability: Not Supported 00:21:31.127 Abort Command Limit: 4 00:21:31.127 Async Event Request Limit: 4 00:21:31.127 Number of Firmware Slots: N/A 00:21:31.127 Firmware Slot 1 Read-Only: N/A 00:21:31.127 Firmware Activation Without Reset: N/A 00:21:31.127 Multiple Update Detection Support: N/A 00:21:31.127 Firmware Update Granularity: No Information Provided 00:21:31.127 Per-Namespace SMART Log: Yes 00:21:31.127 Asymmetric Namespace Access Log Page: Not Supported 00:21:31.127 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:21:31.127 Command Effects Log Page: Supported 00:21:31.128 Get Log Page Extended Data: Supported 00:21:31.128 Telemetry Log Pages: Not Supported 00:21:31.128 Persistent Event Log Pages: Not Supported 00:21:31.128 Supported Log Pages Log Page: May Support 00:21:31.128 Commands Supported & Effects Log Page: Not Supported 00:21:31.128 Feature Identifiers & Effects Log Page:May Support 00:21:31.128 NVMe-MI Commands & Effects Log Page: May Support 00:21:31.128 Data Area 4 for Telemetry Log: Not Supported 00:21:31.128 Error Log Page Entries Supported: 1 00:21:31.128 Keep Alive: Not Supported 00:21:31.128 00:21:31.128 NVM Command Set Attributes 00:21:31.128 ========================== 00:21:31.128 Submission Queue Entry Size 00:21:31.128 Max: 64 00:21:31.128 Min: 64 00:21:31.128 Completion Queue Entry Size 00:21:31.128 Max: 16 00:21:31.128 Min: 16 00:21:31.128 Number of Namespaces: 256 00:21:31.128 Compare Command: Supported 00:21:31.128 Write Uncorrectable Command: Not Supported 00:21:31.128 Dataset Management Command: Supported 00:21:31.128 Write Zeroes Command: Supported 00:21:31.128 Set Features Save Field: Supported 00:21:31.128 Reservations: Not Supported 00:21:31.128 Timestamp: Supported 00:21:31.128 Copy: Supported 00:21:31.128 Volatile Write Cache: Present 00:21:31.128 Atomic Write Unit (Normal): 1 00:21:31.128 Atomic Write Unit (PFail): 1 00:21:31.128 Atomic Compare & Write Unit: 1 00:21:31.128 Fused Compare & Write: Not Supported 00:21:31.128 Scatter-Gather List 00:21:31.128 SGL Command Set: Supported 00:21:31.128 SGL Keyed: Not Supported 00:21:31.128 SGL Bit Bucket Descriptor: Not Supported 00:21:31.128 SGL Metadata Pointer: Not Supported 00:21:31.128 Oversized SGL: Not Supported 00:21:31.128 SGL Metadata Address: Not Supported 00:21:31.128 SGL Offset: Not Supported 00:21:31.128 Transport SGL Data Block: Not Supported 00:21:31.128 Replay Protected Memory Block: Not Supported 00:21:31.128 00:21:31.128 Firmware Slot Information 00:21:31.128 ========================= 00:21:31.128 Active slot: 1 00:21:31.128 Slot 1 Firmware Revision: 1.0 00:21:31.128 00:21:31.128 00:21:31.128 Commands Supported and Effects 00:21:31.128 ============================== 00:21:31.128 Admin Commands 00:21:31.128 -------------- 00:21:31.128 Delete I/O Submission Queue (00h): Supported 00:21:31.128 Create I/O Submission Queue (01h): Supported 00:21:31.128 Get Log Page (02h): Supported 00:21:31.128 Delete I/O Completion Queue (04h): Supported 00:21:31.128 Create I/O Completion Queue (05h): Supported 00:21:31.128 Identify (06h): Supported 00:21:31.128 Abort (08h): Supported 00:21:31.128 Set Features (09h): Supported 00:21:31.128 Get Features (0Ah): Supported 00:21:31.128 Asynchronous Event Request (0Ch): Supported 00:21:31.128 Namespace Attachment (15h): Supported NS-Inventory-Change 00:21:31.128 Directive Send (19h): Supported 00:21:31.128 Directive Receive (1Ah): Supported 00:21:31.128 Virtualization 
Management (1Ch): Supported 00:21:31.128 Doorbell Buffer Config (7Ch): Supported 00:21:31.128 Format NVM (80h): Supported LBA-Change 00:21:31.128 I/O Commands 00:21:31.128 ------------ 00:21:31.128 Flush (00h): Supported LBA-Change 00:21:31.128 Write (01h): Supported LBA-Change 00:21:31.128 Read (02h): Supported 00:21:31.128 Compare (05h): Supported 00:21:31.128 Write Zeroes (08h): Supported LBA-Change 00:21:31.128 Dataset Management (09h): Supported LBA-Change 00:21:31.128 Unknown (0Ch): Supported 00:21:31.128 Unknown (12h): Supported 00:21:31.128 Copy (19h): Supported LBA-Change 00:21:31.128 Unknown (1Dh): Supported LBA-Change 00:21:31.128 00:21:31.128 Error Log 00:21:31.128 ========= 00:21:31.128 00:21:31.128 Arbitration 00:21:31.128 =========== 00:21:31.128 Arbitration Burst: no limit 00:21:31.128 00:21:31.128 Power Management 00:21:31.128 ================ 00:21:31.128 Number of Power States: 1 00:21:31.128 Current Power State: Power State #0 00:21:31.128 Power State #0: 00:21:31.128 Max Power: 25.00 W 00:21:31.128 Non-Operational State: Operational 00:21:31.128 Entry Latency: 16 microseconds 00:21:31.128 Exit Latency: 4 microseconds 00:21:31.128 Relative Read Throughput: 0 00:21:31.128 Relative Read Latency: 0 00:21:31.128 Relative Write Throughput: 0 00:21:31.128 Relative Write Latency: 0 00:21:31.128 Idle Power: Not Reported 00:21:31.128 Active Power: Not Reported 00:21:31.128 Non-Operational Permissive Mode: Not Supported 00:21:31.128 00:21:31.128 Health Information 00:21:31.128 ================== 00:21:31.128 Critical Warnings: 00:21:31.128 Available Spare Space: OK 00:21:31.128 Temperature: OK 00:21:31.128 Device Reliability: OK 00:21:31.128 Read Only: No 00:21:31.128 Volatile Memory Backup: OK 00:21:31.128 Current Temperature: 323 Kelvin (50 Celsius) 00:21:31.128 Temperature Threshold: 343 Kelvin (70 Celsius) 00:21:31.128 Available Spare: 0% 00:21:31.128 Available Spare Threshold: 0% 00:21:31.128 Life Percentage Used: 0% 00:21:31.128 Data Units Read: 1050 00:21:31.128 Data Units Written: 882 00:21:31.128 Host Read Commands: 47535 00:21:31.128 Host Write Commands: 46022 00:21:31.128 Controller Busy Time: 0 minutes 00:21:31.128 Power Cycles: 0 00:21:31.128 Power On Hours: 0 hours 00:21:31.128 Unsafe Shutdowns: 0 00:21:31.128 Unrecoverable Media Errors: 0 00:21:31.128 Lifetime Error Log Entries: 0 00:21:31.128 Warning Temperature Time: 0 minutes 00:21:31.128 Critical Temperature Time: 0 minutes 00:21:31.128 00:21:31.128 Number of Queues 00:21:31.128 ================ 00:21:31.128 Number of I/O Submission Queues: 64 00:21:31.128 Number of I/O Completion Queues: 64 00:21:31.128 00:21:31.128 ZNS Specific Controller Data 00:21:31.128 ============================ 00:21:31.128 Zone Append Size Limit: 0 00:21:31.128 00:21:31.128 00:21:31.128 Active Namespaces 00:21:31.128 ================= 00:21:31.128 Namespace ID:1 00:21:31.128 Error Recovery Timeout: Unlimited 00:21:31.128 Command Set Identifier: NVM (00h) 00:21:31.128 Deallocate: Supported 00:21:31.128 Deallocated/Unwritten Error: Supported 00:21:31.128 Deallocated Read Value: All 0x00 00:21:31.128 Deallocate in Write Zeroes: Not Supported 00:21:31.128 Deallocated Guard Field: 0xFFFF 00:21:31.128 Flush: Supported 00:21:31.128 Reservation: Not Supported 00:21:31.128 Metadata Transferred as: Separate Metadata Buffer 00:21:31.128 Namespace Sharing Capabilities: Private 00:21:31.128 Size (in LBAs): 1548666 (5GiB) 00:21:31.128 Capacity (in LBAs): 1548666 (5GiB) 00:21:31.128 Utilization (in LBAs): 1548666 (5GiB) 00:21:31.128 Thin 
Provisioning: Not Supported 00:21:31.128 Per-NS Atomic Units: No 00:21:31.128 Maximum Single Source Range Length: 128 00:21:31.128 Maximum Copy Length: 128 00:21:31.128 Maximum Source Range Count: 128 00:21:31.128 NGUID/EUI64 Never Reused: No 00:21:31.128 Namespace Write Protected: No 00:21:31.128 Number of LBA Formats: 8 00:21:31.128 Current LBA Format: LBA Format #07 00:21:31.128 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:31.128 LBA Format #01: Data Size: 512 Metadata Size: 8 00:21:31.128 LBA Format #02: Data Size: 512 Metadata Size: 16 00:21:31.128 LBA Format #03: Data Size: 512 Metadata Size: 64 00:21:31.128 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:21:31.128 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:21:31.128 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:21:31.128 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:21:31.128 00:21:31.128 14:40:39 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:21:31.128 14:40:39 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:21:31.387 ===================================================== 00:21:31.388 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:21:31.388 ===================================================== 00:21:31.388 Controller Capabilities/Features 00:21:31.388 ================================ 00:21:31.388 Vendor ID: 1b36 00:21:31.388 Subsystem Vendor ID: 1af4 00:21:31.388 Serial Number: 12341 00:21:31.388 Model Number: QEMU NVMe Ctrl 00:21:31.388 Firmware Version: 8.0.0 00:21:31.388 Recommended Arb Burst: 6 00:21:31.388 IEEE OUI Identifier: 00 54 52 00:21:31.388 Multi-path I/O 00:21:31.388 May have multiple subsystem ports: No 00:21:31.388 May have multiple controllers: No 00:21:31.388 Associated with SR-IOV VF: No 00:21:31.388 Max Data Transfer Size: 524288 00:21:31.388 Max Number of Namespaces: 256 00:21:31.388 Max Number of I/O Queues: 64 00:21:31.388 NVMe Specification Version (VS): 1.4 00:21:31.388 NVMe Specification Version (Identify): 1.4 00:21:31.388 Maximum Queue Entries: 2048 00:21:31.388 Contiguous Queues Required: Yes 00:21:31.388 Arbitration Mechanisms Supported 00:21:31.388 Weighted Round Robin: Not Supported 00:21:31.388 Vendor Specific: Not Supported 00:21:31.388 Reset Timeout: 7500 ms 00:21:31.388 Doorbell Stride: 4 bytes 00:21:31.388 NVM Subsystem Reset: Not Supported 00:21:31.388 Command Sets Supported 00:21:31.388 NVM Command Set: Supported 00:21:31.388 Boot Partition: Not Supported 00:21:31.388 Memory Page Size Minimum: 4096 bytes 00:21:31.388 Memory Page Size Maximum: 65536 bytes 00:21:31.388 Persistent Memory Region: Not Supported 00:21:31.388 Optional Asynchronous Events Supported 00:21:31.388 Namespace Attribute Notices: Supported 00:21:31.388 Firmware Activation Notices: Not Supported 00:21:31.388 ANA Change Notices: Not Supported 00:21:31.388 PLE Aggregate Log Change Notices: Not Supported 00:21:31.388 LBA Status Info Alert Notices: Not Supported 00:21:31.388 EGE Aggregate Log Change Notices: Not Supported 00:21:31.388 Normal NVM Subsystem Shutdown event: Not Supported 00:21:31.388 Zone Descriptor Change Notices: Not Supported 00:21:31.388 Discovery Log Change Notices: Not Supported 00:21:31.388 Controller Attributes 00:21:31.388 128-bit Host Identifier: Not Supported 00:21:31.388 Non-Operational Permissive Mode: Not Supported 00:21:31.388 NVM Sets: Not Supported 00:21:31.388 Read Recovery Levels: Not Supported 00:21:31.388 Endurance Groups: Not Supported 00:21:31.388 Predictable Latency Mode: Not 
Supported 00:21:31.388 Traffic Based Keep ALive: Not Supported 00:21:31.388 Namespace Granularity: Not Supported 00:21:31.388 SQ Associations: Not Supported 00:21:31.388 UUID List: Not Supported 00:21:31.388 Multi-Domain Subsystem: Not Supported 00:21:31.388 Fixed Capacity Management: Not Supported 00:21:31.388 Variable Capacity Management: Not Supported 00:21:31.388 Delete Endurance Group: Not Supported 00:21:31.388 Delete NVM Set: Not Supported 00:21:31.388 Extended LBA Formats Supported: Supported 00:21:31.388 Flexible Data Placement Supported: Not Supported 00:21:31.388 00:21:31.388 Controller Memory Buffer Support 00:21:31.388 ================================ 00:21:31.388 Supported: No 00:21:31.388 00:21:31.388 Persistent Memory Region Support 00:21:31.388 ================================ 00:21:31.388 Supported: No 00:21:31.388 00:21:31.388 Admin Command Set Attributes 00:21:31.388 ============================ 00:21:31.388 Security Send/Receive: Not Supported 00:21:31.388 Format NVM: Supported 00:21:31.388 Firmware Activate/Download: Not Supported 00:21:31.388 Namespace Management: Supported 00:21:31.388 Device Self-Test: Not Supported 00:21:31.388 Directives: Supported 00:21:31.388 NVMe-MI: Not Supported 00:21:31.388 Virtualization Management: Not Supported 00:21:31.388 Doorbell Buffer Config: Supported 00:21:31.388 Get LBA Status Capability: Not Supported 00:21:31.388 Command & Feature Lockdown Capability: Not Supported 00:21:31.388 Abort Command Limit: 4 00:21:31.388 Async Event Request Limit: 4 00:21:31.388 Number of Firmware Slots: N/A 00:21:31.388 Firmware Slot 1 Read-Only: N/A 00:21:31.388 Firmware Activation Without Reset: N/A 00:21:31.388 Multiple Update Detection Support: N/A 00:21:31.388 Firmware Update Granularity: No Information Provided 00:21:31.388 Per-Namespace SMART Log: Yes 00:21:31.388 Asymmetric Namespace Access Log Page: Not Supported 00:21:31.388 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:21:31.388 Command Effects Log Page: Supported 00:21:31.388 Get Log Page Extended Data: Supported 00:21:31.388 Telemetry Log Pages: Not Supported 00:21:31.388 Persistent Event Log Pages: Not Supported 00:21:31.388 Supported Log Pages Log Page: May Support 00:21:31.388 Commands Supported & Effects Log Page: Not Supported 00:21:31.388 Feature Identifiers & Effects Log Page:May Support 00:21:31.388 NVMe-MI Commands & Effects Log Page: May Support 00:21:31.388 Data Area 4 for Telemetry Log: Not Supported 00:21:31.388 Error Log Page Entries Supported: 1 00:21:31.388 Keep Alive: Not Supported 00:21:31.388 00:21:31.388 NVM Command Set Attributes 00:21:31.388 ========================== 00:21:31.388 Submission Queue Entry Size 00:21:31.388 Max: 64 00:21:31.388 Min: 64 00:21:31.388 Completion Queue Entry Size 00:21:31.388 Max: 16 00:21:31.388 Min: 16 00:21:31.388 Number of Namespaces: 256 00:21:31.388 Compare Command: Supported 00:21:31.388 Write Uncorrectable Command: Not Supported 00:21:31.388 Dataset Management Command: Supported 00:21:31.388 Write Zeroes Command: Supported 00:21:31.388 Set Features Save Field: Supported 00:21:31.388 Reservations: Not Supported 00:21:31.388 Timestamp: Supported 00:21:31.388 Copy: Supported 00:21:31.388 Volatile Write Cache: Present 00:21:31.388 Atomic Write Unit (Normal): 1 00:21:31.388 Atomic Write Unit (PFail): 1 00:21:31.388 Atomic Compare & Write Unit: 1 00:21:31.388 Fused Compare & Write: Not Supported 00:21:31.388 Scatter-Gather List 00:21:31.388 SGL Command Set: Supported 00:21:31.388 SGL Keyed: Not Supported 00:21:31.388 SGL Bit Bucket 
Descriptor: Not Supported 00:21:31.388 SGL Metadata Pointer: Not Supported 00:21:31.388 Oversized SGL: Not Supported 00:21:31.388 SGL Metadata Address: Not Supported 00:21:31.388 SGL Offset: Not Supported 00:21:31.388 Transport SGL Data Block: Not Supported 00:21:31.388 Replay Protected Memory Block: Not Supported 00:21:31.388 00:21:31.388 Firmware Slot Information 00:21:31.388 ========================= 00:21:31.388 Active slot: 1 00:21:31.388 Slot 1 Firmware Revision: 1.0 00:21:31.388 00:21:31.388 00:21:31.388 Commands Supported and Effects 00:21:31.388 ============================== 00:21:31.388 Admin Commands 00:21:31.388 -------------- 00:21:31.388 Delete I/O Submission Queue (00h): Supported 00:21:31.388 Create I/O Submission Queue (01h): Supported 00:21:31.388 Get Log Page (02h): Supported 00:21:31.388 Delete I/O Completion Queue (04h): Supported 00:21:31.388 Create I/O Completion Queue (05h): Supported 00:21:31.388 Identify (06h): Supported 00:21:31.388 Abort (08h): Supported 00:21:31.388 Set Features (09h): Supported 00:21:31.388 Get Features (0Ah): Supported 00:21:31.388 Asynchronous Event Request (0Ch): Supported 00:21:31.388 Namespace Attachment (15h): Supported NS-Inventory-Change 00:21:31.388 Directive Send (19h): Supported 00:21:31.388 Directive Receive (1Ah): Supported 00:21:31.388 Virtualization Management (1Ch): Supported 00:21:31.388 Doorbell Buffer Config (7Ch): Supported 00:21:31.388 Format NVM (80h): Supported LBA-Change 00:21:31.388 I/O Commands 00:21:31.388 ------------ 00:21:31.388 Flush (00h): Supported LBA-Change 00:21:31.388 Write (01h): Supported LBA-Change 00:21:31.388 Read (02h): Supported 00:21:31.388 Compare (05h): Supported 00:21:31.388 Write Zeroes (08h): Supported LBA-Change 00:21:31.388 Dataset Management (09h): Supported LBA-Change 00:21:31.388 Unknown (0Ch): Supported 00:21:31.388 Unknown (12h): Supported 00:21:31.388 Copy (19h): Supported LBA-Change 00:21:31.388 Unknown (1Dh): Supported LBA-Change 00:21:31.388 00:21:31.388 Error Log 00:21:31.388 ========= 00:21:31.388 00:21:31.388 Arbitration 00:21:31.388 =========== 00:21:31.388 Arbitration Burst: no limit 00:21:31.388 00:21:31.388 Power Management 00:21:31.388 ================ 00:21:31.388 Number of Power States: 1 00:21:31.388 Current Power State: Power State #0 00:21:31.388 Power State #0: 00:21:31.388 Max Power: 25.00 W 00:21:31.388 Non-Operational State: Operational 00:21:31.388 Entry Latency: 16 microseconds 00:21:31.388 Exit Latency: 4 microseconds 00:21:31.388 Relative Read Throughput: 0 00:21:31.388 Relative Read Latency: 0 00:21:31.388 Relative Write Throughput: 0 00:21:31.388 Relative Write Latency: 0 00:21:31.647 Idle Power: Not Reported 00:21:31.647 Active Power: Not Reported 00:21:31.647 Non-Operational Permissive Mode: Not Supported 00:21:31.647 00:21:31.647 Health Information 00:21:31.647 ================== 00:21:31.647 Critical Warnings: 00:21:31.647 Available Spare Space: OK 00:21:31.647 Temperature: OK 00:21:31.647 Device Reliability: OK 00:21:31.647 Read Only: No 00:21:31.647 Volatile Memory Backup: OK 00:21:31.647 Current Temperature: 323 Kelvin (50 Celsius) 00:21:31.647 Temperature Threshold: 343 Kelvin (70 Celsius) 00:21:31.647 Available Spare: 0% 00:21:31.647 Available Spare Threshold: 0% 00:21:31.647 Life Percentage Used: 0% 00:21:31.647 Data Units Read: 733 00:21:31.647 Data Units Written: 579 00:21:31.647 Host Read Commands: 33359 00:21:31.647 Host Write Commands: 31037 00:21:31.647 Controller Busy Time: 0 minutes 00:21:31.647 Power Cycles: 0 00:21:31.647 Power On Hours: 0 
hours 00:21:31.647 Unsafe Shutdowns: 0 00:21:31.647 Unrecoverable Media Errors: 0 00:21:31.647 Lifetime Error Log Entries: 0 00:21:31.647 Warning Temperature Time: 0 minutes 00:21:31.647 Critical Temperature Time: 0 minutes 00:21:31.647 00:21:31.647 Number of Queues 00:21:31.647 ================ 00:21:31.647 Number of I/O Submission Queues: 64 00:21:31.647 Number of I/O Completion Queues: 64 00:21:31.647 00:21:31.647 ZNS Specific Controller Data 00:21:31.647 ============================ 00:21:31.647 Zone Append Size Limit: 0 00:21:31.647 00:21:31.647 00:21:31.647 Active Namespaces 00:21:31.647 ================= 00:21:31.647 Namespace ID:1 00:21:31.647 Error Recovery Timeout: Unlimited 00:21:31.647 Command Set Identifier: NVM (00h) 00:21:31.647 Deallocate: Supported 00:21:31.647 Deallocated/Unwritten Error: Supported 00:21:31.647 Deallocated Read Value: All 0x00 00:21:31.647 Deallocate in Write Zeroes: Not Supported 00:21:31.647 Deallocated Guard Field: 0xFFFF 00:21:31.647 Flush: Supported 00:21:31.647 Reservation: Not Supported 00:21:31.647 Namespace Sharing Capabilities: Private 00:21:31.647 Size (in LBAs): 1310720 (5GiB) 00:21:31.647 Capacity (in LBAs): 1310720 (5GiB) 00:21:31.647 Utilization (in LBAs): 1310720 (5GiB) 00:21:31.647 Thin Provisioning: Not Supported 00:21:31.647 Per-NS Atomic Units: No 00:21:31.647 Maximum Single Source Range Length: 128 00:21:31.647 Maximum Copy Length: 128 00:21:31.647 Maximum Source Range Count: 128 00:21:31.647 NGUID/EUI64 Never Reused: No 00:21:31.647 Namespace Write Protected: No 00:21:31.647 Number of LBA Formats: 8 00:21:31.647 Current LBA Format: LBA Format #04 00:21:31.647 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:31.647 LBA Format #01: Data Size: 512 Metadata Size: 8 00:21:31.647 LBA Format #02: Data Size: 512 Metadata Size: 16 00:21:31.647 LBA Format #03: Data Size: 512 Metadata Size: 64 00:21:31.647 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:21:31.647 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:21:31.647 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:21:31.647 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:21:31.647 00:21:31.647 14:40:40 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:21:31.647 14:40:40 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:21:31.906 ===================================================== 00:21:31.906 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:21:31.906 ===================================================== 00:21:31.906 Controller Capabilities/Features 00:21:31.906 ================================ 00:21:31.906 Vendor ID: 1b36 00:21:31.906 Subsystem Vendor ID: 1af4 00:21:31.906 Serial Number: 12342 00:21:31.906 Model Number: QEMU NVMe Ctrl 00:21:31.906 Firmware Version: 8.0.0 00:21:31.906 Recommended Arb Burst: 6 00:21:31.906 IEEE OUI Identifier: 00 54 52 00:21:31.906 Multi-path I/O 00:21:31.906 May have multiple subsystem ports: No 00:21:31.906 May have multiple controllers: No 00:21:31.906 Associated with SR-IOV VF: No 00:21:31.906 Max Data Transfer Size: 524288 00:21:31.906 Max Number of Namespaces: 256 00:21:31.906 Max Number of I/O Queues: 64 00:21:31.906 NVMe Specification Version (VS): 1.4 00:21:31.906 NVMe Specification Version (Identify): 1.4 00:21:31.906 Maximum Queue Entries: 2048 00:21:31.906 Contiguous Queues Required: Yes 00:21:31.906 Arbitration Mechanisms Supported 00:21:31.906 Weighted Round Robin: Not Supported 00:21:31.906 Vendor Specific: Not Supported 00:21:31.906 Reset 
Timeout: 7500 ms 00:21:31.906 Doorbell Stride: 4 bytes 00:21:31.906 NVM Subsystem Reset: Not Supported 00:21:31.906 Command Sets Supported 00:21:31.906 NVM Command Set: Supported 00:21:31.906 Boot Partition: Not Supported 00:21:31.906 Memory Page Size Minimum: 4096 bytes 00:21:31.906 Memory Page Size Maximum: 65536 bytes 00:21:31.907 Persistent Memory Region: Not Supported 00:21:31.907 Optional Asynchronous Events Supported 00:21:31.907 Namespace Attribute Notices: Supported 00:21:31.907 Firmware Activation Notices: Not Supported 00:21:31.907 ANA Change Notices: Not Supported 00:21:31.907 PLE Aggregate Log Change Notices: Not Supported 00:21:31.907 LBA Status Info Alert Notices: Not Supported 00:21:31.907 EGE Aggregate Log Change Notices: Not Supported 00:21:31.907 Normal NVM Subsystem Shutdown event: Not Supported 00:21:31.907 Zone Descriptor Change Notices: Not Supported 00:21:31.907 Discovery Log Change Notices: Not Supported 00:21:31.907 Controller Attributes 00:21:31.907 128-bit Host Identifier: Not Supported 00:21:31.907 Non-Operational Permissive Mode: Not Supported 00:21:31.907 NVM Sets: Not Supported 00:21:31.907 Read Recovery Levels: Not Supported 00:21:31.907 Endurance Groups: Not Supported 00:21:31.907 Predictable Latency Mode: Not Supported 00:21:31.907 Traffic Based Keep ALive: Not Supported 00:21:31.907 Namespace Granularity: Not Supported 00:21:31.907 SQ Associations: Not Supported 00:21:31.907 UUID List: Not Supported 00:21:31.907 Multi-Domain Subsystem: Not Supported 00:21:31.907 Fixed Capacity Management: Not Supported 00:21:31.907 Variable Capacity Management: Not Supported 00:21:31.907 Delete Endurance Group: Not Supported 00:21:31.907 Delete NVM Set: Not Supported 00:21:31.907 Extended LBA Formats Supported: Supported 00:21:31.907 Flexible Data Placement Supported: Not Supported 00:21:31.907 00:21:31.907 Controller Memory Buffer Support 00:21:31.907 ================================ 00:21:31.907 Supported: No 00:21:31.907 00:21:31.907 Persistent Memory Region Support 00:21:31.907 ================================ 00:21:31.907 Supported: No 00:21:31.907 00:21:31.907 Admin Command Set Attributes 00:21:31.907 ============================ 00:21:31.907 Security Send/Receive: Not Supported 00:21:31.907 Format NVM: Supported 00:21:31.907 Firmware Activate/Download: Not Supported 00:21:31.907 Namespace Management: Supported 00:21:31.907 Device Self-Test: Not Supported 00:21:31.907 Directives: Supported 00:21:31.907 NVMe-MI: Not Supported 00:21:31.907 Virtualization Management: Not Supported 00:21:31.907 Doorbell Buffer Config: Supported 00:21:31.907 Get LBA Status Capability: Not Supported 00:21:31.907 Command & Feature Lockdown Capability: Not Supported 00:21:31.907 Abort Command Limit: 4 00:21:31.907 Async Event Request Limit: 4 00:21:31.907 Number of Firmware Slots: N/A 00:21:31.907 Firmware Slot 1 Read-Only: N/A 00:21:31.907 Firmware Activation Without Reset: N/A 00:21:31.907 Multiple Update Detection Support: N/A 00:21:31.907 Firmware Update Granularity: No Information Provided 00:21:31.907 Per-Namespace SMART Log: Yes 00:21:31.907 Asymmetric Namespace Access Log Page: Not Supported 00:21:31.907 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:21:31.907 Command Effects Log Page: Supported 00:21:31.907 Get Log Page Extended Data: Supported 00:21:31.907 Telemetry Log Pages: Not Supported 00:21:31.907 Persistent Event Log Pages: Not Supported 00:21:31.907 Supported Log Pages Log Page: May Support 00:21:31.907 Commands Supported & Effects Log Page: Not Supported 00:21:31.907 
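The Health Information blocks above report temperatures in Kelvin with the Celsius value in parentheses; on these integer-rounded figures the conversion is a flat offset of 273. A one-line sanity check of the two values that recur throughout this run:

  for k in 323 343; do echo "$k Kelvin = $((k - 273)) Celsius"; done   # 50 and 70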
Feature Identifiers & Effects Log Page:May Support 00:21:31.907 NVMe-MI Commands & Effects Log Page: May Support 00:21:31.907 Data Area 4 for Telemetry Log: Not Supported 00:21:31.907 Error Log Page Entries Supported: 1 00:21:31.907 Keep Alive: Not Supported 00:21:31.907 00:21:31.907 NVM Command Set Attributes 00:21:31.907 ========================== 00:21:31.907 Submission Queue Entry Size 00:21:31.907 Max: 64 00:21:31.907 Min: 64 00:21:31.907 Completion Queue Entry Size 00:21:31.907 Max: 16 00:21:31.907 Min: 16 00:21:31.907 Number of Namespaces: 256 00:21:31.907 Compare Command: Supported 00:21:31.907 Write Uncorrectable Command: Not Supported 00:21:31.907 Dataset Management Command: Supported 00:21:31.907 Write Zeroes Command: Supported 00:21:31.907 Set Features Save Field: Supported 00:21:31.907 Reservations: Not Supported 00:21:31.907 Timestamp: Supported 00:21:31.907 Copy: Supported 00:21:31.907 Volatile Write Cache: Present 00:21:31.907 Atomic Write Unit (Normal): 1 00:21:31.907 Atomic Write Unit (PFail): 1 00:21:31.907 Atomic Compare & Write Unit: 1 00:21:31.907 Fused Compare & Write: Not Supported 00:21:31.907 Scatter-Gather List 00:21:31.907 SGL Command Set: Supported 00:21:31.907 SGL Keyed: Not Supported 00:21:31.907 SGL Bit Bucket Descriptor: Not Supported 00:21:31.907 SGL Metadata Pointer: Not Supported 00:21:31.907 Oversized SGL: Not Supported 00:21:31.907 SGL Metadata Address: Not Supported 00:21:31.907 SGL Offset: Not Supported 00:21:31.907 Transport SGL Data Block: Not Supported 00:21:31.907 Replay Protected Memory Block: Not Supported 00:21:31.907 00:21:31.907 Firmware Slot Information 00:21:31.907 ========================= 00:21:31.907 Active slot: 1 00:21:31.907 Slot 1 Firmware Revision: 1.0 00:21:31.907 00:21:31.907 00:21:31.907 Commands Supported and Effects 00:21:31.907 ============================== 00:21:31.907 Admin Commands 00:21:31.907 -------------- 00:21:31.907 Delete I/O Submission Queue (00h): Supported 00:21:31.907 Create I/O Submission Queue (01h): Supported 00:21:31.907 Get Log Page (02h): Supported 00:21:31.907 Delete I/O Completion Queue (04h): Supported 00:21:31.907 Create I/O Completion Queue (05h): Supported 00:21:31.907 Identify (06h): Supported 00:21:31.907 Abort (08h): Supported 00:21:31.907 Set Features (09h): Supported 00:21:31.907 Get Features (0Ah): Supported 00:21:31.907 Asynchronous Event Request (0Ch): Supported 00:21:31.907 Namespace Attachment (15h): Supported NS-Inventory-Change 00:21:31.907 Directive Send (19h): Supported 00:21:31.907 Directive Receive (1Ah): Supported 00:21:31.907 Virtualization Management (1Ch): Supported 00:21:31.907 Doorbell Buffer Config (7Ch): Supported 00:21:31.907 Format NVM (80h): Supported LBA-Change 00:21:31.907 I/O Commands 00:21:31.907 ------------ 00:21:31.907 Flush (00h): Supported LBA-Change 00:21:31.907 Write (01h): Supported LBA-Change 00:21:31.907 Read (02h): Supported 00:21:31.907 Compare (05h): Supported 00:21:31.907 Write Zeroes (08h): Supported LBA-Change 00:21:31.907 Dataset Management (09h): Supported LBA-Change 00:21:31.907 Unknown (0Ch): Supported 00:21:31.907 Unknown (12h): Supported 00:21:31.907 Copy (19h): Supported LBA-Change 00:21:31.907 Unknown (1Dh): Supported LBA-Change 00:21:31.907 00:21:31.907 Error Log 00:21:31.907 ========= 00:21:31.907 00:21:31.907 Arbitration 00:21:31.907 =========== 00:21:31.907 Arbitration Burst: no limit 00:21:31.907 00:21:31.907 Power Management 00:21:31.907 ================ 00:21:31.907 Number of Power States: 1 00:21:31.907 Current Power State: Power State 
#0 00:21:31.907 Power State #0: 00:21:31.907 Max Power: 25.00 W 00:21:31.907 Non-Operational State: Operational 00:21:31.907 Entry Latency: 16 microseconds 00:21:31.907 Exit Latency: 4 microseconds 00:21:31.907 Relative Read Throughput: 0 00:21:31.907 Relative Read Latency: 0 00:21:31.907 Relative Write Throughput: 0 00:21:31.907 Relative Write Latency: 0 00:21:31.907 Idle Power: Not Reported 00:21:31.907 Active Power: Not Reported 00:21:31.907 Non-Operational Permissive Mode: Not Supported 00:21:31.907 00:21:31.907 Health Information 00:21:31.907 ================== 00:21:31.907 Critical Warnings: 00:21:31.907 Available Spare Space: OK 00:21:31.907 Temperature: OK 00:21:31.907 Device Reliability: OK 00:21:31.907 Read Only: No 00:21:31.907 Volatile Memory Backup: OK 00:21:31.907 Current Temperature: 323 Kelvin (50 Celsius) 00:21:31.907 Temperature Threshold: 343 Kelvin (70 Celsius) 00:21:31.907 Available Spare: 0% 00:21:31.907 Available Spare Threshold: 0% 00:21:31.907 Life Percentage Used: 0% 00:21:31.907 Data Units Read: 2259 00:21:31.907 Data Units Written: 1940 00:21:31.907 Host Read Commands: 99654 00:21:31.908 Host Write Commands: 95424 00:21:31.908 Controller Busy Time: 0 minutes 00:21:31.908 Power Cycles: 0 00:21:31.908 Power On Hours: 0 hours 00:21:31.908 Unsafe Shutdowns: 0 00:21:31.908 Unrecoverable Media Errors: 0 00:21:31.908 Lifetime Error Log Entries: 0 00:21:31.908 Warning Temperature Time: 0 minutes 00:21:31.908 Critical Temperature Time: 0 minutes 00:21:31.908 00:21:31.908 Number of Queues 00:21:31.908 ================ 00:21:31.908 Number of I/O Submission Queues: 64 00:21:31.908 Number of I/O Completion Queues: 64 00:21:31.908 00:21:31.908 ZNS Specific Controller Data 00:21:31.908 ============================ 00:21:31.908 Zone Append Size Limit: 0 00:21:31.908 00:21:31.908 00:21:31.908 Active Namespaces 00:21:31.908 ================= 00:21:31.908 Namespace ID:1 00:21:31.908 Error Recovery Timeout: Unlimited 00:21:31.908 Command Set Identifier: NVM (00h) 00:21:31.908 Deallocate: Supported 00:21:31.908 Deallocated/Unwritten Error: Supported 00:21:31.908 Deallocated Read Value: All 0x00 00:21:31.908 Deallocate in Write Zeroes: Not Supported 00:21:31.908 Deallocated Guard Field: 0xFFFF 00:21:31.908 Flush: Supported 00:21:31.908 Reservation: Not Supported 00:21:31.908 Namespace Sharing Capabilities: Private 00:21:31.908 Size (in LBAs): 1048576 (4GiB) 00:21:31.908 Capacity (in LBAs): 1048576 (4GiB) 00:21:31.908 Utilization (in LBAs): 1048576 (4GiB) 00:21:31.908 Thin Provisioning: Not Supported 00:21:31.908 Per-NS Atomic Units: No 00:21:31.908 Maximum Single Source Range Length: 128 00:21:31.908 Maximum Copy Length: 128 00:21:31.908 Maximum Source Range Count: 128 00:21:31.908 NGUID/EUI64 Never Reused: No 00:21:31.908 Namespace Write Protected: No 00:21:31.908 Number of LBA Formats: 8 00:21:31.908 Current LBA Format: LBA Format #04 00:21:31.908 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:31.908 LBA Format #01: Data Size: 512 Metadata Size: 8 00:21:31.908 LBA Format #02: Data Size: 512 Metadata Size: 16 00:21:31.908 LBA Format #03: Data Size: 512 Metadata Size: 64 00:21:31.908 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:21:31.908 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:21:31.908 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:21:31.908 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:21:31.908 00:21:31.908 Namespace ID:2 00:21:31.908 Error Recovery Timeout: Unlimited 00:21:31.908 Command Set Identifier: NVM (00h) 00:21:31.908 Deallocate: 
Supported 00:21:31.908 Deallocated/Unwritten Error: Supported 00:21:31.908 Deallocated Read Value: All 0x00 00:21:31.908 Deallocate in Write Zeroes: Not Supported 00:21:31.908 Deallocated Guard Field: 0xFFFF 00:21:31.908 Flush: Supported 00:21:31.908 Reservation: Not Supported 00:21:31.908 Namespace Sharing Capabilities: Private 00:21:31.908 Size (in LBAs): 1048576 (4GiB) 00:21:31.908 Capacity (in LBAs): 1048576 (4GiB) 00:21:31.908 Utilization (in LBAs): 1048576 (4GiB) 00:21:31.908 Thin Provisioning: Not Supported 00:21:31.908 Per-NS Atomic Units: No 00:21:31.908 Maximum Single Source Range Length: 128 00:21:31.908 Maximum Copy Length: 128 00:21:31.908 Maximum Source Range Count: 128 00:21:31.908 NGUID/EUI64 Never Reused: No 00:21:31.908 Namespace Write Protected: No 00:21:31.908 Number of LBA Formats: 8 00:21:31.908 Current LBA Format: LBA Format #04 00:21:31.908 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:31.908 LBA Format #01: Data Size: 512 Metadata Size: 8 00:21:31.908 LBA Format #02: Data Size: 512 Metadata Size: 16 00:21:31.908 LBA Format #03: Data Size: 512 Metadata Size: 64 00:21:31.908 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:21:31.908 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:21:31.908 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:21:31.908 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:21:31.908 00:21:31.908 Namespace ID:3 00:21:31.908 Error Recovery Timeout: Unlimited 00:21:31.908 Command Set Identifier: NVM (00h) 00:21:31.908 Deallocate: Supported 00:21:31.908 Deallocated/Unwritten Error: Supported 00:21:31.908 Deallocated Read Value: All 0x00 00:21:31.908 Deallocate in Write Zeroes: Not Supported 00:21:31.908 Deallocated Guard Field: 0xFFFF 00:21:31.908 Flush: Supported 00:21:31.908 Reservation: Not Supported 00:21:31.908 Namespace Sharing Capabilities: Private 00:21:31.908 Size (in LBAs): 1048576 (4GiB) 00:21:31.908 Capacity (in LBAs): 1048576 (4GiB) 00:21:31.908 Utilization (in LBAs): 1048576 (4GiB) 00:21:31.908 Thin Provisioning: Not Supported 00:21:31.908 Per-NS Atomic Units: No 00:21:31.908 Maximum Single Source Range Length: 128 00:21:31.908 Maximum Copy Length: 128 00:21:31.908 Maximum Source Range Count: 128 00:21:31.908 NGUID/EUI64 Never Reused: No 00:21:31.908 Namespace Write Protected: No 00:21:31.908 Number of LBA Formats: 8 00:21:31.908 Current LBA Format: LBA Format #04 00:21:31.908 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:31.908 LBA Format #01: Data Size: 512 Metadata Size: 8 00:21:31.908 LBA Format #02: Data Size: 512 Metadata Size: 16 00:21:31.908 LBA Format #03: Data Size: 512 Metadata Size: 64 00:21:31.908 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:21:31.908 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:21:31.908 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:21:31.908 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:21:31.908 00:21:31.908 14:40:40 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:21:31.908 14:40:40 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:21:32.168 ===================================================== 00:21:32.168 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:21:32.168 ===================================================== 00:21:32.168 Controller Capabilities/Features 00:21:32.168 ================================ 00:21:32.168 Vendor ID: 1b36 00:21:32.168 Subsystem Vendor ID: 1af4 00:21:32.168 Serial Number: 12343 00:21:32.168 Model Number: QEMU NVMe Ctrl 00:21:32.168 
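Every controller in this run advertises Maximum Queue Entries: 2048 with 64-byte submission and 16-byte completion entries, so the memory cost of one full-depth queue pair is fixed. A quick calculation from those reported values:

  entries=2048   # Maximum Queue Entries, from the dumps above
  echo "SQ: $((entries * 64)) bytes, CQ: $((entries * 16)) bytes"   # 131072 and 32768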
Firmware Version: 8.0.0 00:21:32.168 Recommended Arb Burst: 6 00:21:32.168 IEEE OUI Identifier: 00 54 52 00:21:32.168 Multi-path I/O 00:21:32.168 May have multiple subsystem ports: No 00:21:32.168 May have multiple controllers: Yes 00:21:32.168 Associated with SR-IOV VF: No 00:21:32.168 Max Data Transfer Size: 524288 00:21:32.168 Max Number of Namespaces: 256 00:21:32.168 Max Number of I/O Queues: 64 00:21:32.168 NVMe Specification Version (VS): 1.4 00:21:32.168 NVMe Specification Version (Identify): 1.4 00:21:32.168 Maximum Queue Entries: 2048 00:21:32.168 Contiguous Queues Required: Yes 00:21:32.168 Arbitration Mechanisms Supported 00:21:32.168 Weighted Round Robin: Not Supported 00:21:32.168 Vendor Specific: Not Supported 00:21:32.168 Reset Timeout: 7500 ms 00:21:32.168 Doorbell Stride: 4 bytes 00:21:32.168 NVM Subsystem Reset: Not Supported 00:21:32.168 Command Sets Supported 00:21:32.168 NVM Command Set: Supported 00:21:32.168 Boot Partition: Not Supported 00:21:32.168 Memory Page Size Minimum: 4096 bytes 00:21:32.168 Memory Page Size Maximum: 65536 bytes 00:21:32.168 Persistent Memory Region: Not Supported 00:21:32.168 Optional Asynchronous Events Supported 00:21:32.168 Namespace Attribute Notices: Supported 00:21:32.168 Firmware Activation Notices: Not Supported 00:21:32.168 ANA Change Notices: Not Supported 00:21:32.168 PLE Aggregate Log Change Notices: Not Supported 00:21:32.168 LBA Status Info Alert Notices: Not Supported 00:21:32.168 EGE Aggregate Log Change Notices: Not Supported 00:21:32.168 Normal NVM Subsystem Shutdown event: Not Supported 00:21:32.168 Zone Descriptor Change Notices: Not Supported 00:21:32.168 Discovery Log Change Notices: Not Supported 00:21:32.168 Controller Attributes 00:21:32.168 128-bit Host Identifier: Not Supported 00:21:32.168 Non-Operational Permissive Mode: Not Supported 00:21:32.168 NVM Sets: Not Supported 00:21:32.168 Read Recovery Levels: Not Supported 00:21:32.168 Endurance Groups: Supported 00:21:32.168 Predictable Latency Mode: Not Supported 00:21:32.168 Traffic Based Keep ALive: Not Supported 00:21:32.168 Namespace Granularity: Not Supported 00:21:32.168 SQ Associations: Not Supported 00:21:32.168 UUID List: Not Supported 00:21:32.168 Multi-Domain Subsystem: Not Supported 00:21:32.168 Fixed Capacity Management: Not Supported 00:21:32.168 Variable Capacity Management: Not Supported 00:21:32.168 Delete Endurance Group: Not Supported 00:21:32.168 Delete NVM Set: Not Supported 00:21:32.168 Extended LBA Formats Supported: Supported 00:21:32.168 Flexible Data Placement Supported: Supported 00:21:32.168 00:21:32.168 Controller Memory Buffer Support 00:21:32.168 ================================ 00:21:32.168 Supported: No 00:21:32.168 00:21:32.168 Persistent Memory Region Support 00:21:32.168 ================================ 00:21:32.168 Supported: No 00:21:32.168 00:21:32.168 Admin Command Set Attributes 00:21:32.168 ============================ 00:21:32.168 Security Send/Receive: Not Supported 00:21:32.168 Format NVM: Supported 00:21:32.168 Firmware Activate/Download: Not Supported 00:21:32.168 Namespace Management: Supported 00:21:32.168 Device Self-Test: Not Supported 00:21:32.168 Directives: Supported 00:21:32.168 NVMe-MI: Not Supported 00:21:32.168 Virtualization Management: Not Supported 00:21:32.168 Doorbell Buffer Config: Supported 00:21:32.168 Get LBA Status Capability: Not Supported 00:21:32.168 Command & Feature Lockdown Capability: Not Supported 00:21:32.168 Abort Command Limit: 4 00:21:32.168 Async Event Request Limit: 4 00:21:32.168 
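Serial 12343 at 0000:00:13.0 above is the only controller in this run reporting Endurance Groups and Flexible Data Placement as Supported; the 12340/12341/12342 dumps all say Not Supported. With each dump saved per device (hypothetical identify-<bdf>.txt names), the FDP-capable controllers could be listed with:

  grep -l 'Flexible Data Placement Supported: Supported' identify-*.txt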
Number of Firmware Slots: N/A 00:21:32.168 Firmware Slot 1 Read-Only: N/A 00:21:32.168 Firmware Activation Without Reset: N/A 00:21:32.168 Multiple Update Detection Support: N/A 00:21:32.168 Firmware Update Granularity: No Information Provided 00:21:32.168 Per-Namespace SMART Log: Yes 00:21:32.168 Asymmetric Namespace Access Log Page: Not Supported 00:21:32.168 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:21:32.168 Command Effects Log Page: Supported 00:21:32.168 Get Log Page Extended Data: Supported 00:21:32.168 Telemetry Log Pages: Not Supported 00:21:32.168 Persistent Event Log Pages: Not Supported 00:21:32.168 Supported Log Pages Log Page: May Support 00:21:32.168 Commands Supported & Effects Log Page: Not Supported 00:21:32.168 Feature Identifiers & Effects Log Page:May Support 00:21:32.168 NVMe-MI Commands & Effects Log Page: May Support 00:21:32.168 Data Area 4 for Telemetry Log: Not Supported 00:21:32.168 Error Log Page Entries Supported: 1 00:21:32.168 Keep Alive: Not Supported 00:21:32.168 00:21:32.168 NVM Command Set Attributes 00:21:32.168 ========================== 00:21:32.168 Submission Queue Entry Size 00:21:32.168 Max: 64 00:21:32.168 Min: 64 00:21:32.168 Completion Queue Entry Size 00:21:32.168 Max: 16 00:21:32.168 Min: 16 00:21:32.168 Number of Namespaces: 256 00:21:32.168 Compare Command: Supported 00:21:32.168 Write Uncorrectable Command: Not Supported 00:21:32.168 Dataset Management Command: Supported 00:21:32.168 Write Zeroes Command: Supported 00:21:32.168 Set Features Save Field: Supported 00:21:32.168 Reservations: Not Supported 00:21:32.168 Timestamp: Supported 00:21:32.168 Copy: Supported 00:21:32.168 Volatile Write Cache: Present 00:21:32.168 Atomic Write Unit (Normal): 1 00:21:32.168 Atomic Write Unit (PFail): 1 00:21:32.168 Atomic Compare & Write Unit: 1 00:21:32.168 Fused Compare & Write: Not Supported 00:21:32.168 Scatter-Gather List 00:21:32.168 SGL Command Set: Supported 00:21:32.168 SGL Keyed: Not Supported 00:21:32.168 SGL Bit Bucket Descriptor: Not Supported 00:21:32.168 SGL Metadata Pointer: Not Supported 00:21:32.168 Oversized SGL: Not Supported 00:21:32.168 SGL Metadata Address: Not Supported 00:21:32.168 SGL Offset: Not Supported 00:21:32.168 Transport SGL Data Block: Not Supported 00:21:32.168 Replay Protected Memory Block: Not Supported 00:21:32.168 00:21:32.168 Firmware Slot Information 00:21:32.168 ========================= 00:21:32.168 Active slot: 1 00:21:32.168 Slot 1 Firmware Revision: 1.0 00:21:32.168 00:21:32.168 00:21:32.168 Commands Supported and Effects 00:21:32.168 ============================== 00:21:32.168 Admin Commands 00:21:32.168 -------------- 00:21:32.168 Delete I/O Submission Queue (00h): Supported 00:21:32.168 Create I/O Submission Queue (01h): Supported 00:21:32.168 Get Log Page (02h): Supported 00:21:32.168 Delete I/O Completion Queue (04h): Supported 00:21:32.168 Create I/O Completion Queue (05h): Supported 00:21:32.168 Identify (06h): Supported 00:21:32.168 Abort (08h): Supported 00:21:32.168 Set Features (09h): Supported 00:21:32.168 Get Features (0Ah): Supported 00:21:32.168 Asynchronous Event Request (0Ch): Supported 00:21:32.168 Namespace Attachment (15h): Supported NS-Inventory-Change 00:21:32.168 Directive Send (19h): Supported 00:21:32.168 Directive Receive (1Ah): Supported 00:21:32.168 Virtualization Management (1Ch): Supported 00:21:32.168 Doorbell Buffer Config (7Ch): Supported 00:21:32.168 Format NVM (80h): Supported LBA-Change 00:21:32.168 I/O Commands 00:21:32.168 ------------ 00:21:32.168 Flush (00h): 
Supported LBA-Change 00:21:32.168 Write (01h): Supported LBA-Change 00:21:32.168 Read (02h): Supported 00:21:32.168 Compare (05h): Supported 00:21:32.168 Write Zeroes (08h): Supported LBA-Change 00:21:32.168 Dataset Management (09h): Supported LBA-Change 00:21:32.168 Unknown (0Ch): Supported 00:21:32.168 Unknown (12h): Supported 00:21:32.168 Copy (19h): Supported LBA-Change 00:21:32.168 Unknown (1Dh): Supported LBA-Change 00:21:32.168 00:21:32.168 Error Log 00:21:32.168 ========= 00:21:32.168 00:21:32.168 Arbitration 00:21:32.168 =========== 00:21:32.168 Arbitration Burst: no limit 00:21:32.168 00:21:32.168 Power Management 00:21:32.169 ================ 00:21:32.169 Number of Power States: 1 00:21:32.169 Current Power State: Power State #0 00:21:32.169 Power State #0: 00:21:32.169 Max Power: 25.00 W 00:21:32.169 Non-Operational State: Operational 00:21:32.169 Entry Latency: 16 microseconds 00:21:32.169 Exit Latency: 4 microseconds 00:21:32.169 Relative Read Throughput: 0 00:21:32.169 Relative Read Latency: 0 00:21:32.169 Relative Write Throughput: 0 00:21:32.169 Relative Write Latency: 0 00:21:32.169 Idle Power: Not Reported 00:21:32.169 Active Power: Not Reported 00:21:32.169 Non-Operational Permissive Mode: Not Supported 00:21:32.169 00:21:32.169 Health Information 00:21:32.169 ================== 00:21:32.169 Critical Warnings: 00:21:32.169 Available Spare Space: OK 00:21:32.169 Temperature: OK 00:21:32.169 Device Reliability: OK 00:21:32.169 Read Only: No 00:21:32.169 Volatile Memory Backup: OK 00:21:32.169 Current Temperature: 323 Kelvin (50 Celsius) 00:21:32.169 Temperature Threshold: 343 Kelvin (70 Celsius) 00:21:32.169 Available Spare: 0% 00:21:32.169 Available Spare Threshold: 0% 00:21:32.169 Life Percentage Used: 0% 00:21:32.169 Data Units Read: 782 00:21:32.169 Data Units Written: 718 00:21:32.169 Host Read Commands: 32911 00:21:32.169 Host Write Commands: 32481 00:21:32.169 Controller Busy Time: 0 minutes 00:21:32.169 Power Cycles: 0 00:21:32.169 Power On Hours: 0 hours 00:21:32.169 Unsafe Shutdowns: 0 00:21:32.169 Unrecoverable Media Errors: 0 00:21:32.169 Lifetime Error Log Entries: 0 00:21:32.169 Warning Temperature Time: 0 minutes 00:21:32.169 Critical Temperature Time: 0 minutes 00:21:32.169 00:21:32.169 Number of Queues 00:21:32.169 ================ 00:21:32.169 Number of I/O Submission Queues: 64 00:21:32.169 Number of I/O Completion Queues: 64 00:21:32.169 00:21:32.169 ZNS Specific Controller Data 00:21:32.169 ============================ 00:21:32.169 Zone Append Size Limit: 0 00:21:32.169 00:21:32.169 00:21:32.169 Active Namespaces 00:21:32.169 ================= 00:21:32.169 Namespace ID:1 00:21:32.169 Error Recovery Timeout: Unlimited 00:21:32.169 Command Set Identifier: NVM (00h) 00:21:32.169 Deallocate: Supported 00:21:32.169 Deallocated/Unwritten Error: Supported 00:21:32.169 Deallocated Read Value: All 0x00 00:21:32.169 Deallocate in Write Zeroes: Not Supported 00:21:32.169 Deallocated Guard Field: 0xFFFF 00:21:32.169 Flush: Supported 00:21:32.169 Reservation: Not Supported 00:21:32.169 Namespace Sharing Capabilities: Multiple Controllers 00:21:32.169 Size (in LBAs): 262144 (1GiB) 00:21:32.169 Capacity (in LBAs): 262144 (1GiB) 00:21:32.169 Utilization (in LBAs): 262144 (1GiB) 00:21:32.169 Thin Provisioning: Not Supported 00:21:32.169 Per-NS Atomic Units: No 00:21:32.169 Maximum Single Source Range Length: 128 00:21:32.169 Maximum Copy Length: 128 00:21:32.169 Maximum Source Range Count: 128 00:21:32.169 NGUID/EUI64 Never Reused: No 00:21:32.169 Namespace Write 
Protected: No 00:21:32.169 Endurance group ID: 1 00:21:32.169 Number of LBA Formats: 8 00:21:32.169 Current LBA Format: LBA Format #04 00:21:32.169 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:32.169 LBA Format #01: Data Size: 512 Metadata Size: 8 00:21:32.169 LBA Format #02: Data Size: 512 Metadata Size: 16 00:21:32.169 LBA Format #03: Data Size: 512 Metadata Size: 64 00:21:32.169 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:21:32.169 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:21:32.169 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:21:32.169 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:21:32.169 00:21:32.169 Get Feature FDP: 00:21:32.169 ================ 00:21:32.169 Enabled: Yes 00:21:32.169 FDP configuration index: 0 00:21:32.169 00:21:32.169 FDP configurations log page 00:21:32.169 =========================== 00:21:32.169 Number of FDP configurations: 1 00:21:32.169 Version: 0 00:21:32.169 Size: 112 00:21:32.169 FDP Configuration Descriptor: 0 00:21:32.169 Descriptor Size: 96 00:21:32.169 Reclaim Group Identifier format: 2 00:21:32.169 FDP Volatile Write Cache: Not Present 00:21:32.169 FDP Configuration: Valid 00:21:32.169 Vendor Specific Size: 0 00:21:32.169 Number of Reclaim Groups: 2 00:21:32.169 Number of Reclaim Unit Handles: 8 00:21:32.169 Max Placement Identifiers: 128 00:21:32.169 Number of Namespaces Supported: 256 00:21:32.169 Reclaim Unit Nominal Size: 6000000 bytes 00:21:32.169 Estimated Reclaim Unit Time Limit: Not Reported 00:21:32.169 RUH Desc #000: RUH Type: Initially Isolated 00:21:32.169 RUH Desc #001: RUH Type: Initially Isolated 00:21:32.169 RUH Desc #002: RUH Type: Initially Isolated 00:21:32.169 RUH Desc #003: RUH Type: Initially Isolated 00:21:32.169 RUH Desc #004: RUH Type: Initially Isolated 00:21:32.169 RUH Desc #005: RUH Type: Initially Isolated 00:21:32.169 RUH Desc #006: RUH Type: Initially Isolated 00:21:32.169 RUH Desc #007: RUH Type: Initially Isolated 00:21:32.169 00:21:32.169 FDP reclaim unit handle usage log page 00:21:32.428 ====================================== 00:21:32.428 Number of Reclaim Unit Handles: 8 00:21:32.428 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:21:32.428 RUH Usage Desc #001: RUH Attributes: Unused 00:21:32.428 RUH Usage Desc #002: RUH Attributes: Unused 00:21:32.428 RUH Usage Desc #003: RUH Attributes: Unused 00:21:32.428 RUH Usage Desc #004: RUH Attributes: Unused 00:21:32.428 RUH Usage Desc #005: RUH Attributes: Unused 00:21:32.428 RUH Usage Desc #006: RUH Attributes: Unused 00:21:32.428 RUH Usage Desc #007: RUH Attributes: Unused 00:21:32.428 00:21:32.428 FDP statistics log page 00:21:32.428 ======================= 00:21:32.428 Host bytes with metadata written: 445030400 00:21:32.428 Media bytes with metadata written: 445095936 00:21:32.428 Media bytes erased: 0 00:21:32.428 00:21:32.428 FDP events log page 00:21:32.428 =================== 00:21:32.428 Number of FDP events: 0 00:21:32.428 00:21:32.428 ************************************ 00:21:32.428 END TEST nvme_identify 00:21:32.428 ************************************ 00:21:32.428 00:21:32.428 real 0m1.998s 00:21:32.428 user 0m0.721s 00:21:32.428 sys 0m1.037s 00:21:32.428 14:40:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:32.428 14:40:40 -- common/autotest_common.sh@10 -- # set +x 00:21:32.428 14:40:40 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:21:32.428 14:40:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:32.428 14:40:40 -- common/autotest_common.sh@1093 -- #
xtrace_disable 00:21:32.428 14:40:40 -- common/autotest_common.sh@10 -- # set +x 00:21:32.428 ************************************ 00:21:32.428 START TEST nvme_perf 00:21:32.428 ************************************ 00:21:32.428 14:40:40 -- common/autotest_common.sh@1111 -- # nvme_perf 00:21:32.428 14:40:40 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:21:33.824 Initializing NVMe Controllers 00:21:33.824 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:21:33.824 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:21:33.824 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:21:33.824 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:21:33.824 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:21:33.824 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:21:33.824 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:21:33.824 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:21:33.824 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:21:33.824 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:21:33.824 Initialization complete. Launching workers. 00:21:33.824 ======================================================== 00:21:33.824 Latency(us) 00:21:33.824 Device Information : IOPS MiB/s Average min max 00:21:33.824 PCIE (0000:00:10.0) NSID 1 from core 0: 11538.91 135.22 11120.74 8167.88 48827.42 00:21:33.824 PCIE (0000:00:11.0) NSID 1 from core 0: 11538.91 135.22 11096.81 8206.64 45914.43 00:21:33.824 PCIE (0000:00:13.0) NSID 1 from core 0: 11538.91 135.22 11072.35 8227.11 43781.36 00:21:33.824 PCIE (0000:00:12.0) NSID 1 from core 0: 11538.91 135.22 11046.26 8256.90 41055.27 00:21:33.824 PCIE (0000:00:12.0) NSID 2 from core 0: 11538.91 135.22 11018.76 8242.04 37933.25 00:21:33.824 PCIE (0000:00:12.0) NSID 3 from core 0: 11602.66 135.97 10934.02 8257.71 30230.31 00:21:33.824 ======================================================== 00:21:33.824 Total : 69297.19 812.08 11048.05 8167.88 48827.42 00:21:33.824 00:21:33.824 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:21:33.824 ================================================================================= 00:21:33.824 1.00000% : 8426.057us 00:21:33.824 10.00000% : 8987.794us 00:21:33.824 25.00000% : 9674.362us 00:21:33.824 50.00000% : 10610.590us 00:21:33.824 75.00000% : 11359.573us 00:21:33.824 90.00000% : 12857.539us 00:21:33.824 95.00000% : 14480.335us 00:21:33.824 98.00000% : 18100.419us 00:21:33.824 99.00000% : 40195.413us 00:21:33.824 99.50000% : 46936.259us 00:21:33.824 99.90000% : 48434.225us 00:21:33.824 99.99000% : 48933.547us 00:21:33.824 99.99900% : 48933.547us 00:21:33.824 99.99990% : 48933.547us 00:21:33.824 99.99999% : 48933.547us 00:21:33.824 00:21:33.824 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:21:33.824 ================================================================================= 00:21:33.824 1.00000% : 8550.888us 00:21:33.824 10.00000% : 9050.210us 00:21:33.824 25.00000% : 9674.362us 00:21:33.824 50.00000% : 10610.590us 00:21:33.824 75.00000% : 11297.158us 00:21:33.824 90.00000% : 12857.539us 00:21:33.824 95.00000% : 14542.750us 00:21:33.824 98.00000% : 18100.419us 00:21:33.824 99.00000% : 37449.143us 00:21:33.824 99.50000% : 43940.328us 00:21:33.824 99.90000% : 45687.954us 00:21:33.824 99.99000% : 45937.615us 00:21:33.824 99.99900% : 45937.615us 00:21:33.824 99.99990% : 45937.615us 00:21:33.824 99.99999% : 45937.615us 00:21:33.824 00:21:33.824 
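The per-controller summary blocks here come from spdk_nvme_perf's latency tracking: the command line logged above passes -LL, which enables software latency tracking with the percentile summaries and histograms that follow. A minimal sketch for repeating such a run against a single controller, assuming a built SPDK tree; the -r transport-ID flag is standard spdk_nvme_perf usage, but the 0000:00:10.0 address below is illustrative:

    # Sketch: repeat the logged read run against one PCIe controller.
    # -q 128   queue depth; -w read  sequential reads; -o 12288  12 KiB I/Os
    # -t 1     run for one second; -LL  latency tracking with histograms
    # -r       restrict to one controller by transport ID (address assumed)
    sudo ./build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL \
        -r 'trtype:PCIe traddr:0000:00:10.0'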
Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:21:33.824 ================================================================================= 00:21:33.824 1.00000% : 8488.472us 00:21:33.824 10.00000% : 9050.210us 00:21:33.824 25.00000% : 9674.362us 00:21:33.824 50.00000% : 10610.590us 00:21:33.824 75.00000% : 11297.158us 00:21:33.824 90.00000% : 12857.539us 00:21:33.824 95.00000% : 14480.335us 00:21:33.824 98.00000% : 18474.910us 00:21:33.824 99.00000% : 35451.855us 00:21:33.824 99.50000% : 41943.040us 00:21:33.824 99.90000% : 43441.006us 00:21:33.824 99.99000% : 43940.328us 00:21:33.824 99.99900% : 43940.328us 00:21:33.824 99.99990% : 43940.328us 00:21:33.824 99.99999% : 43940.328us 00:21:33.824 00:21:33.824 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:21:33.824 ================================================================================= 00:21:33.824 1.00000% : 8488.472us 00:21:33.824 10.00000% : 9050.210us 00:21:33.824 25.00000% : 9611.947us 00:21:33.824 50.00000% : 10610.590us 00:21:33.824 75.00000% : 11297.158us 00:21:33.824 90.00000% : 12795.124us 00:21:33.824 95.00000% : 14605.166us 00:21:33.824 98.00000% : 18724.571us 00:21:33.824 99.00000% : 32206.263us 00:21:33.824 99.50000% : 39196.770us 00:21:33.824 99.90000% : 40694.735us 00:21:33.824 99.99000% : 41194.057us 00:21:33.824 99.99900% : 41194.057us 00:21:33.824 99.99990% : 41194.057us 00:21:33.824 99.99999% : 41194.057us 00:21:33.824 00:21:33.824 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:21:33.824 ================================================================================= 00:21:33.824 1.00000% : 8488.472us 00:21:33.824 10.00000% : 8987.794us 00:21:33.824 25.00000% : 9674.362us 00:21:33.824 50.00000% : 10610.590us 00:21:33.824 75.00000% : 11359.573us 00:21:33.824 90.00000% : 12732.709us 00:21:33.824 95.00000% : 15229.318us 00:21:33.824 98.00000% : 17351.436us 00:21:33.824 99.00000% : 29459.992us 00:21:33.824 99.50000% : 35951.177us 00:21:33.824 99.90000% : 37698.804us 00:21:33.824 99.99000% : 37948.465us 00:21:33.824 99.99900% : 37948.465us 00:21:33.824 99.99990% : 37948.465us 00:21:33.824 99.99999% : 37948.465us 00:21:33.824 00:21:33.825 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:21:33.825 ================================================================================= 00:21:33.825 1.00000% : 8488.472us 00:21:33.825 10.00000% : 9050.210us 00:21:33.825 25.00000% : 9674.362us 00:21:33.825 50.00000% : 10610.590us 00:21:33.825 75.00000% : 11359.573us 00:21:33.825 90.00000% : 12795.124us 00:21:33.825 95.00000% : 14979.657us 00:21:33.825 98.00000% : 17850.758us 00:21:33.825 99.00000% : 21845.333us 00:21:33.825 99.50000% : 28336.518us 00:21:33.825 99.90000% : 29959.314us 00:21:33.825 99.99000% : 30208.975us 00:21:33.825 99.99900% : 30333.806us 00:21:33.825 99.99990% : 30333.806us 00:21:33.825 99.99999% : 30333.806us 00:21:33.825 00:21:33.825 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:21:33.825 ============================================================================== 00:21:33.825 Range in us Cumulative IO count 00:21:33.825 8113.981 - 8176.396: 0.0173% ( 2) 00:21:33.825 8176.396 - 8238.811: 0.1122% ( 11) 00:21:33.825 8238.811 - 8301.227: 0.3108% ( 23) 00:21:33.825 8301.227 - 8363.642: 0.6043% ( 34) 00:21:33.825 8363.642 - 8426.057: 1.0963% ( 57) 00:21:33.825 8426.057 - 8488.472: 1.6920% ( 69) 00:21:33.825 8488.472 - 8550.888: 2.4258% ( 85) 00:21:33.825 8550.888 - 8613.303: 3.1941% ( 89) 00:21:33.825 8613.303 - 
8675.718: 4.1436% ( 110) 00:21:33.825 8675.718 - 8738.133: 5.1968% ( 122) 00:21:33.825 8738.133 - 8800.549: 6.3536% ( 134) 00:21:33.825 8800.549 - 8862.964: 7.6140% ( 146) 00:21:33.825 8862.964 - 8925.379: 8.9693% ( 157) 00:21:33.825 8925.379 - 8987.794: 10.3419% ( 159) 00:21:33.825 8987.794 - 9050.210: 11.8180% ( 171) 00:21:33.825 9050.210 - 9112.625: 13.3028% ( 172) 00:21:33.825 9112.625 - 9175.040: 14.7531% ( 168) 00:21:33.825 9175.040 - 9237.455: 16.2983% ( 179) 00:21:33.825 9237.455 - 9299.870: 17.7486% ( 168) 00:21:33.825 9299.870 - 9362.286: 19.1730% ( 165) 00:21:33.825 9362.286 - 9424.701: 20.5369% ( 158) 00:21:33.825 9424.701 - 9487.116: 21.8577% ( 153) 00:21:33.825 9487.116 - 9549.531: 23.1785% ( 153) 00:21:33.825 9549.531 - 9611.947: 24.4561% ( 148) 00:21:33.825 9611.947 - 9674.362: 25.6906% ( 143) 00:21:33.825 9674.362 - 9736.777: 27.1927% ( 174) 00:21:33.825 9736.777 - 9799.192: 28.5998% ( 163) 00:21:33.825 9799.192 - 9861.608: 30.1537% ( 180) 00:21:33.825 9861.608 - 9924.023: 31.7075% ( 180) 00:21:33.825 9924.023 - 9986.438: 33.3823% ( 194) 00:21:33.825 9986.438 - 10048.853: 34.9879% ( 186) 00:21:33.825 10048.853 - 10111.269: 36.5677% ( 183) 00:21:33.825 10111.269 - 10173.684: 38.2510% ( 195) 00:21:33.825 10173.684 - 10236.099: 40.0035% ( 203) 00:21:33.825 10236.099 - 10298.514: 41.8767% ( 217) 00:21:33.825 10298.514 - 10360.930: 43.8709% ( 231) 00:21:33.825 10360.930 - 10423.345: 45.8218% ( 226) 00:21:33.825 10423.345 - 10485.760: 47.7814% ( 227) 00:21:33.825 10485.760 - 10548.175: 49.9050% ( 246) 00:21:33.825 10548.175 - 10610.590: 51.9769% ( 240) 00:21:33.825 10610.590 - 10673.006: 54.2213% ( 260) 00:21:33.825 10673.006 - 10735.421: 56.3968% ( 252) 00:21:33.825 10735.421 - 10797.836: 58.6326% ( 259) 00:21:33.825 10797.836 - 10860.251: 60.8943% ( 262) 00:21:33.825 10860.251 - 10922.667: 63.1302% ( 259) 00:21:33.825 10922.667 - 10985.082: 65.3142% ( 253) 00:21:33.825 10985.082 - 11047.497: 67.3774% ( 239) 00:21:33.825 11047.497 - 11109.912: 69.3715% ( 231) 00:21:33.825 11109.912 - 11172.328: 71.1412% ( 205) 00:21:33.825 11172.328 - 11234.743: 72.7469% ( 186) 00:21:33.825 11234.743 - 11297.158: 74.2403% ( 173) 00:21:33.825 11297.158 - 11359.573: 75.6302% ( 161) 00:21:33.825 11359.573 - 11421.989: 76.8301% ( 139) 00:21:33.825 11421.989 - 11484.404: 77.8574% ( 119) 00:21:33.825 11484.404 - 11546.819: 78.8760% ( 118) 00:21:33.825 11546.819 - 11609.234: 79.7825% ( 105) 00:21:33.825 11609.234 - 11671.650: 80.6026% ( 95) 00:21:33.825 11671.650 - 11734.065: 81.3795% ( 90) 00:21:33.825 11734.065 - 11796.480: 82.0701% ( 80) 00:21:33.825 11796.480 - 11858.895: 82.8211% ( 87) 00:21:33.825 11858.895 - 11921.310: 83.4858% ( 77) 00:21:33.825 11921.310 - 11983.726: 84.1506% ( 77) 00:21:33.825 11983.726 - 12046.141: 84.8153% ( 77) 00:21:33.825 12046.141 - 12108.556: 85.4541% ( 74) 00:21:33.825 12108.556 - 12170.971: 85.9634% ( 59) 00:21:33.825 12170.971 - 12233.387: 86.4641% ( 58) 00:21:33.825 12233.387 - 12295.802: 86.8612% ( 46) 00:21:33.825 12295.802 - 12358.217: 87.2238% ( 42) 00:21:33.825 12358.217 - 12420.632: 87.5691% ( 40) 00:21:33.825 12420.632 - 12483.048: 87.9316% ( 42) 00:21:33.825 12483.048 - 12545.463: 88.2856% ( 41) 00:21:33.825 12545.463 - 12607.878: 88.6568% ( 43) 00:21:33.825 12607.878 - 12670.293: 89.0107% ( 41) 00:21:33.825 12670.293 - 12732.709: 89.3387% ( 38) 00:21:33.825 12732.709 - 12795.124: 89.7531% ( 48) 00:21:33.825 12795.124 - 12857.539: 90.0898% ( 39) 00:21:33.825 12857.539 - 12919.954: 90.4523% ( 42) 00:21:33.825 12919.954 - 12982.370: 90.7977% ( 40) 
00:21:33.825 12982.370 - 13044.785: 91.0739% ( 32) 00:21:33.825 13044.785 - 13107.200: 91.4019% ( 38) 00:21:33.825 13107.200 - 13169.615: 91.6695% ( 31) 00:21:33.825 13169.615 - 13232.030: 91.9458% ( 32) 00:21:33.825 13232.030 - 13294.446: 92.1875% ( 28) 00:21:33.825 13294.446 - 13356.861: 92.4033% ( 25) 00:21:33.825 13356.861 - 13419.276: 92.5328% ( 15) 00:21:33.825 13419.276 - 13481.691: 92.7314% ( 23) 00:21:33.825 13481.691 - 13544.107: 92.9299% ( 23) 00:21:33.825 13544.107 - 13606.522: 93.0853% ( 18) 00:21:33.825 13606.522 - 13668.937: 93.2579% ( 20) 00:21:33.825 13668.937 - 13731.352: 93.4479% ( 22) 00:21:33.825 13731.352 - 13793.768: 93.6378% ( 22) 00:21:33.825 13793.768 - 13856.183: 93.8277% ( 22) 00:21:33.825 13856.183 - 13918.598: 94.0090% ( 21) 00:21:33.825 13918.598 - 13981.013: 94.1557% ( 17) 00:21:33.825 13981.013 - 14043.429: 94.2852% ( 15) 00:21:33.825 14043.429 - 14105.844: 94.4233% ( 16) 00:21:33.825 14105.844 - 14168.259: 94.5528% ( 15) 00:21:33.825 14168.259 - 14230.674: 94.6737% ( 14) 00:21:33.825 14230.674 - 14293.090: 94.7773% ( 12) 00:21:33.825 14293.090 - 14355.505: 94.8722% ( 11) 00:21:33.825 14355.505 - 14417.920: 94.9672% ( 11) 00:21:33.825 14417.920 - 14480.335: 95.0363% ( 8) 00:21:33.825 14480.335 - 14542.750: 95.1140% ( 9) 00:21:33.825 14542.750 - 14605.166: 95.2003% ( 10) 00:21:33.825 14605.166 - 14667.581: 95.3039% ( 12) 00:21:33.825 14667.581 - 14729.996: 95.3729% ( 8) 00:21:33.825 14729.996 - 14792.411: 95.4679% ( 11) 00:21:33.825 14792.411 - 14854.827: 95.5887% ( 14) 00:21:33.825 14854.827 - 14917.242: 95.6664% ( 9) 00:21:33.825 14917.242 - 14979.657: 95.7355% ( 8) 00:21:33.825 14979.657 - 15042.072: 95.8046% ( 8) 00:21:33.825 15042.072 - 15104.488: 95.8823% ( 9) 00:21:33.825 15104.488 - 15166.903: 95.9599% ( 9) 00:21:33.825 15166.903 - 15229.318: 96.0204% ( 7) 00:21:33.825 15229.318 - 15291.733: 96.0722% ( 6) 00:21:33.825 15291.733 - 15354.149: 96.1240% ( 6) 00:21:33.825 15354.149 - 15416.564: 96.1758% ( 6) 00:21:33.825 15416.564 - 15478.979: 96.2189% ( 5) 00:21:33.825 15478.979 - 15541.394: 96.2621% ( 5) 00:21:33.825 15541.394 - 15603.810: 96.3052% ( 5) 00:21:33.825 15603.810 - 15666.225: 96.3484% ( 5) 00:21:33.825 15666.225 - 15728.640: 96.4002% ( 6) 00:21:33.825 15728.640 - 15791.055: 96.4520% ( 6) 00:21:33.825 15791.055 - 15853.470: 96.5124% ( 7) 00:21:33.825 15853.470 - 15915.886: 96.5470% ( 4) 00:21:33.825 15915.886 - 15978.301: 96.6160% ( 8) 00:21:33.825 15978.301 - 16103.131: 96.6851% ( 8) 00:21:33.825 16227.962 - 16352.792: 96.7023% ( 2) 00:21:33.825 16352.792 - 16477.623: 96.7628% ( 7) 00:21:33.825 16477.623 - 16602.453: 96.8577% ( 11) 00:21:33.825 16602.453 - 16727.284: 96.9613% ( 12) 00:21:33.825 16727.284 - 16852.114: 97.0563% ( 11) 00:21:33.825 16852.114 - 16976.945: 97.1599% ( 12) 00:21:33.825 16976.945 - 17101.775: 97.2462% ( 10) 00:21:33.825 17101.775 - 17226.606: 97.3412% ( 11) 00:21:33.825 17226.606 - 17351.436: 97.4361% ( 11) 00:21:33.825 17351.436 - 17476.267: 97.5483% ( 13) 00:21:33.825 17476.267 - 17601.097: 97.6260% ( 9) 00:21:33.825 17601.097 - 17725.928: 97.7901% ( 19) 00:21:33.825 17725.928 - 17850.758: 97.9023% ( 13) 00:21:33.825 17850.758 - 17975.589: 97.9627% ( 7) 00:21:33.825 17975.589 - 18100.419: 98.0577% ( 11) 00:21:33.825 18100.419 - 18225.250: 98.1613% ( 12) 00:21:33.825 18225.250 - 18350.080: 98.2648% ( 12) 00:21:33.825 18350.080 - 18474.910: 98.3943% ( 15) 00:21:33.825 18474.910 - 18599.741: 98.4979% ( 12) 00:21:33.825 18599.741 - 18724.571: 98.6102% ( 13) 00:21:33.825 18724.571 - 18849.402: 98.7224% ( 13) 
00:21:33.825 18849.402 - 18974.232: 98.7914% ( 8) 00:21:33.825 18974.232 - 19099.063: 98.8519% ( 7) 00:21:33.825 19099.063 - 19223.893: 98.8950% ( 5) 00:21:33.825 39446.430 - 39696.091: 98.9296% ( 4) 00:21:33.825 39696.091 - 39945.752: 98.9900% ( 7) 00:21:33.825 39945.752 - 40195.413: 99.0418% ( 6) 00:21:33.825 40195.413 - 40445.074: 99.0936% ( 6) 00:21:33.825 40445.074 - 40694.735: 99.1454% ( 6) 00:21:33.825 40694.735 - 40944.396: 99.2144% ( 8) 00:21:33.825 40944.396 - 41194.057: 99.2749% ( 7) 00:21:33.825 41194.057 - 41443.718: 99.3180% ( 5) 00:21:33.825 41443.718 - 41693.379: 99.3785% ( 7) 00:21:33.825 41693.379 - 41943.040: 99.4389% ( 7) 00:21:33.825 41943.040 - 42192.701: 99.4475% ( 1) 00:21:33.825 46436.937 - 46686.598: 99.4993% ( 6) 00:21:33.825 46686.598 - 46936.259: 99.5511% ( 6) 00:21:33.825 46936.259 - 47185.920: 99.6202% ( 8) 00:21:33.825 47185.920 - 47435.581: 99.6806% ( 7) 00:21:33.825 47435.581 - 47685.242: 99.7410% ( 7) 00:21:33.825 47685.242 - 47934.903: 99.8015% ( 7) 00:21:33.825 47934.903 - 48184.564: 99.8619% ( 7) 00:21:33.825 48184.564 - 48434.225: 99.9137% ( 6) 00:21:33.825 48434.225 - 48683.886: 99.9741% ( 7) 00:21:33.825 48683.886 - 48933.547: 100.0000% ( 3) 00:21:33.825 00:21:33.826 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:21:33.826 ============================================================================== 00:21:33.826 Range in us Cumulative IO count 00:21:33.826 8176.396 - 8238.811: 0.0173% ( 2) 00:21:33.826 8238.811 - 8301.227: 0.0863% ( 8) 00:21:33.826 8301.227 - 8363.642: 0.2590% ( 20) 00:21:33.826 8363.642 - 8426.057: 0.5611% ( 35) 00:21:33.826 8426.057 - 8488.472: 0.9582% ( 46) 00:21:33.826 8488.472 - 8550.888: 1.5884% ( 73) 00:21:33.826 8550.888 - 8613.303: 2.3308% ( 86) 00:21:33.826 8613.303 - 8675.718: 3.1854% ( 99) 00:21:33.826 8675.718 - 8738.133: 4.1868% ( 116) 00:21:33.826 8738.133 - 8800.549: 5.3349% ( 133) 00:21:33.826 8800.549 - 8862.964: 6.5694% ( 143) 00:21:33.826 8862.964 - 8925.379: 8.0110% ( 167) 00:21:33.826 8925.379 - 8987.794: 9.5563% ( 179) 00:21:33.826 8987.794 - 9050.210: 11.2483% ( 196) 00:21:33.826 9050.210 - 9112.625: 12.9403% ( 196) 00:21:33.826 9112.625 - 9175.040: 14.7186% ( 206) 00:21:33.826 9175.040 - 9237.455: 16.3933% ( 194) 00:21:33.826 9237.455 - 9299.870: 17.9644% ( 182) 00:21:33.826 9299.870 - 9362.286: 19.5269% ( 181) 00:21:33.826 9362.286 - 9424.701: 20.9427% ( 164) 00:21:33.826 9424.701 - 9487.116: 22.2721% ( 154) 00:21:33.826 9487.116 - 9549.531: 23.5152% ( 144) 00:21:33.826 9549.531 - 9611.947: 24.6892% ( 136) 00:21:33.826 9611.947 - 9674.362: 25.8028% ( 129) 00:21:33.826 9674.362 - 9736.777: 27.0028% ( 139) 00:21:33.826 9736.777 - 9799.192: 28.1854% ( 137) 00:21:33.826 9799.192 - 9861.608: 29.4199% ( 143) 00:21:33.826 9861.608 - 9924.023: 30.6285% ( 140) 00:21:33.826 9924.023 - 9986.438: 31.9579% ( 154) 00:21:33.826 9986.438 - 10048.853: 33.3391% ( 160) 00:21:33.826 10048.853 - 10111.269: 34.9620% ( 188) 00:21:33.826 10111.269 - 10173.684: 36.7144% ( 203) 00:21:33.826 10173.684 - 10236.099: 38.7172% ( 232) 00:21:33.826 10236.099 - 10298.514: 40.8494% ( 247) 00:21:33.826 10298.514 - 10360.930: 42.9644% ( 245) 00:21:33.826 10360.930 - 10423.345: 45.1744% ( 256) 00:21:33.826 10423.345 - 10485.760: 47.4879% ( 268) 00:21:33.826 10485.760 - 10548.175: 49.8532% ( 274) 00:21:33.826 10548.175 - 10610.590: 52.2445% ( 277) 00:21:33.826 10610.590 - 10673.006: 54.7566% ( 291) 00:21:33.826 10673.006 - 10735.421: 57.1910% ( 282) 00:21:33.826 10735.421 - 10797.836: 59.6167% ( 281) 00:21:33.826 10797.836 - 
10860.251: 61.9044% ( 265) 00:21:33.826 10860.251 - 10922.667: 64.1834% ( 264) 00:21:33.826 10922.667 - 10985.082: 66.2897% ( 244) 00:21:33.826 10985.082 - 11047.497: 68.3097% ( 234) 00:21:33.826 11047.497 - 11109.912: 70.2866% ( 229) 00:21:33.826 11109.912 - 11172.328: 72.1512% ( 216) 00:21:33.826 11172.328 - 11234.743: 73.7742% ( 188) 00:21:33.826 11234.743 - 11297.158: 75.3108% ( 178) 00:21:33.826 11297.158 - 11359.573: 76.6229% ( 152) 00:21:33.826 11359.573 - 11421.989: 77.6416% ( 118) 00:21:33.826 11421.989 - 11484.404: 78.5739% ( 108) 00:21:33.826 11484.404 - 11546.819: 79.3854% ( 94) 00:21:33.826 11546.819 - 11609.234: 80.1105% ( 84) 00:21:33.826 11609.234 - 11671.650: 80.8270% ( 83) 00:21:33.826 11671.650 - 11734.065: 81.5694% ( 86) 00:21:33.826 11734.065 - 11796.480: 82.3118% ( 86) 00:21:33.826 11796.480 - 11858.895: 83.0887% ( 90) 00:21:33.826 11858.895 - 11921.310: 83.7966% ( 82) 00:21:33.826 11921.310 - 11983.726: 84.5477% ( 87) 00:21:33.826 11983.726 - 12046.141: 85.1519% ( 70) 00:21:33.826 12046.141 - 12108.556: 85.5577% ( 47) 00:21:33.826 12108.556 - 12170.971: 85.9720% ( 48) 00:21:33.826 12170.971 - 12233.387: 86.3260% ( 41) 00:21:33.826 12233.387 - 12295.802: 86.7317% ( 47) 00:21:33.826 12295.802 - 12358.217: 87.1288% ( 46) 00:21:33.826 12358.217 - 12420.632: 87.5691% ( 51) 00:21:33.826 12420.632 - 12483.048: 88.0007% ( 50) 00:21:33.826 12483.048 - 12545.463: 88.3978% ( 46) 00:21:33.826 12545.463 - 12607.878: 88.8035% ( 47) 00:21:33.826 12607.878 - 12670.293: 89.1402% ( 39) 00:21:33.826 12670.293 - 12732.709: 89.4855% ( 40) 00:21:33.826 12732.709 - 12795.124: 89.8481% ( 42) 00:21:33.826 12795.124 - 12857.539: 90.1070% ( 30) 00:21:33.826 12857.539 - 12919.954: 90.3315% ( 26) 00:21:33.826 12919.954 - 12982.370: 90.5128% ( 21) 00:21:33.826 12982.370 - 13044.785: 90.6768% ( 19) 00:21:33.826 13044.785 - 13107.200: 90.7977% ( 14) 00:21:33.826 13107.200 - 13169.615: 90.9358% ( 16) 00:21:33.826 13169.615 - 13232.030: 91.1084% ( 20) 00:21:33.826 13232.030 - 13294.446: 91.2811% ( 20) 00:21:33.826 13294.446 - 13356.861: 91.4451% ( 19) 00:21:33.826 13356.861 - 13419.276: 91.6005% ( 18) 00:21:33.826 13419.276 - 13481.691: 91.7818% ( 21) 00:21:33.826 13481.691 - 13544.107: 91.9285% ( 17) 00:21:33.826 13544.107 - 13606.522: 92.0753% ( 17) 00:21:33.826 13606.522 - 13668.937: 92.2134% ( 16) 00:21:33.826 13668.937 - 13731.352: 92.4119% ( 23) 00:21:33.826 13731.352 - 13793.768: 92.6105% ( 23) 00:21:33.826 13793.768 - 13856.183: 92.8349% ( 26) 00:21:33.826 13856.183 - 13918.598: 93.0335% ( 23) 00:21:33.826 13918.598 - 13981.013: 93.3184% ( 33) 00:21:33.826 13981.013 - 14043.429: 93.5428% ( 26) 00:21:33.826 14043.429 - 14105.844: 93.8104% ( 31) 00:21:33.826 14105.844 - 14168.259: 94.0090% ( 23) 00:21:33.826 14168.259 - 14230.674: 94.2248% ( 25) 00:21:33.826 14230.674 - 14293.090: 94.3629% ( 16) 00:21:33.826 14293.090 - 14355.505: 94.5010% ( 16) 00:21:33.826 14355.505 - 14417.920: 94.6651% ( 19) 00:21:33.826 14417.920 - 14480.335: 94.8463% ( 21) 00:21:33.826 14480.335 - 14542.750: 95.0276% ( 21) 00:21:33.826 14542.750 - 14605.166: 95.2089% ( 21) 00:21:33.826 14605.166 - 14667.581: 95.3729% ( 19) 00:21:33.826 14667.581 - 14729.996: 95.5628% ( 22) 00:21:33.826 14729.996 - 14792.411: 95.7355% ( 20) 00:21:33.826 14792.411 - 14854.827: 95.8736% ( 16) 00:21:33.826 14854.827 - 14917.242: 95.9513% ( 9) 00:21:33.826 14917.242 - 14979.657: 96.0204% ( 8) 00:21:33.826 14979.657 - 15042.072: 96.1067% ( 10) 00:21:33.826 15042.072 - 15104.488: 96.1585% ( 6) 00:21:33.826 15104.488 - 15166.903: 96.2276% ( 8) 
00:21:33.826 15166.903 - 15229.318: 96.2794% ( 6) 00:21:33.826 15229.318 - 15291.733: 96.3398% ( 7) 00:21:33.826 15291.733 - 15354.149: 96.4002% ( 7) 00:21:33.826 15354.149 - 15416.564: 96.4520% ( 6) 00:21:33.826 15416.564 - 15478.979: 96.5038% ( 6) 00:21:33.826 15478.979 - 15541.394: 96.5383% ( 4) 00:21:33.826 15541.394 - 15603.810: 96.5642% ( 3) 00:21:33.826 15603.810 - 15666.225: 96.5901% ( 3) 00:21:33.826 15666.225 - 15728.640: 96.6160% ( 3) 00:21:33.826 15728.640 - 15791.055: 96.6419% ( 3) 00:21:33.826 15791.055 - 15853.470: 96.6678% ( 3) 00:21:33.826 15853.470 - 15915.886: 96.6851% ( 2) 00:21:33.826 16103.131 - 16227.962: 96.7110% ( 3) 00:21:33.826 16227.962 - 16352.792: 96.8059% ( 11) 00:21:33.826 16352.792 - 16477.623: 96.9268% ( 14) 00:21:33.826 16477.623 - 16602.453: 97.0477% ( 14) 00:21:33.826 16602.453 - 16727.284: 97.1599% ( 13) 00:21:33.826 16727.284 - 16852.114: 97.2894% ( 15) 00:21:33.826 16852.114 - 16976.945: 97.4102% ( 14) 00:21:33.826 16976.945 - 17101.775: 97.5397% ( 15) 00:21:33.826 17101.775 - 17226.606: 97.6519% ( 13) 00:21:33.826 17226.606 - 17351.436: 97.7296% ( 9) 00:21:33.826 17351.436 - 17476.267: 97.7814% ( 6) 00:21:33.826 17476.267 - 17601.097: 97.7901% ( 1) 00:21:33.826 17601.097 - 17725.928: 97.8073% ( 2) 00:21:33.826 17725.928 - 17850.758: 97.8850% ( 9) 00:21:33.826 17850.758 - 17975.589: 97.9713% ( 10) 00:21:33.826 17975.589 - 18100.419: 98.1095% ( 16) 00:21:33.826 18100.419 - 18225.250: 98.2390% ( 15) 00:21:33.826 18225.250 - 18350.080: 98.3943% ( 18) 00:21:33.826 18350.080 - 18474.910: 98.5238% ( 15) 00:21:33.826 18474.910 - 18599.741: 98.6619% ( 16) 00:21:33.826 18599.741 - 18724.571: 98.7483% ( 10) 00:21:33.826 18724.571 - 18849.402: 98.8087% ( 7) 00:21:33.826 18849.402 - 18974.232: 98.8778% ( 8) 00:21:33.826 18974.232 - 19099.063: 98.8950% ( 2) 00:21:33.826 36700.160 - 36949.821: 98.9037% ( 1) 00:21:33.826 36949.821 - 37199.482: 98.9727% ( 8) 00:21:33.826 37199.482 - 37449.143: 99.0331% ( 7) 00:21:33.826 37449.143 - 37698.804: 99.0936% ( 7) 00:21:33.826 37698.804 - 37948.465: 99.1626% ( 8) 00:21:33.826 37948.465 - 38198.126: 99.2317% ( 8) 00:21:33.826 38198.126 - 38447.787: 99.2835% ( 6) 00:21:33.826 38447.787 - 38697.448: 99.3526% ( 8) 00:21:33.826 38697.448 - 38947.109: 99.4130% ( 7) 00:21:33.826 38947.109 - 39196.770: 99.4475% ( 4) 00:21:33.826 43690.667 - 43940.328: 99.5079% ( 7) 00:21:33.826 43940.328 - 44189.989: 99.5684% ( 7) 00:21:33.826 44189.989 - 44439.650: 99.6288% ( 7) 00:21:33.826 44439.650 - 44689.310: 99.6892% ( 7) 00:21:33.826 44689.310 - 44938.971: 99.7583% ( 8) 00:21:33.826 44938.971 - 45188.632: 99.8101% ( 6) 00:21:33.826 45188.632 - 45438.293: 99.8705% ( 7) 00:21:33.826 45438.293 - 45687.954: 99.9396% ( 8) 00:21:33.826 45687.954 - 45937.615: 100.0000% ( 7) 00:21:33.826 00:21:33.826 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:21:33.826 ============================================================================== 00:21:33.826 Range in us Cumulative IO count 00:21:33.826 8176.396 - 8238.811: 0.0173% ( 2) 00:21:33.826 8238.811 - 8301.227: 0.1209% ( 12) 00:21:33.826 8301.227 - 8363.642: 0.3367% ( 25) 00:21:33.826 8363.642 - 8426.057: 0.6647% ( 38) 00:21:33.826 8426.057 - 8488.472: 1.0704% ( 47) 00:21:33.826 8488.472 - 8550.888: 1.6402% ( 66) 00:21:33.826 8550.888 - 8613.303: 2.3912% ( 87) 00:21:33.826 8613.303 - 8675.718: 3.1595% ( 89) 00:21:33.826 8675.718 - 8738.133: 4.1005% ( 109) 00:21:33.826 8738.133 - 8800.549: 5.2831% ( 137) 00:21:33.826 8800.549 - 8862.964: 6.6126% ( 154) 00:21:33.826 8862.964 - 8925.379: 
8.2441% ( 189) 00:21:33.826 8925.379 - 8987.794: 9.8757% ( 189) 00:21:33.826 8987.794 - 9050.210: 11.5504% ( 194) 00:21:33.826 9050.210 - 9112.625: 13.2769% ( 200) 00:21:33.826 9112.625 - 9175.040: 14.9517% ( 194) 00:21:33.826 9175.040 - 9237.455: 16.5660% ( 187) 00:21:33.827 9237.455 - 9299.870: 18.1544% ( 184) 00:21:33.827 9299.870 - 9362.286: 19.7082% ( 180) 00:21:33.827 9362.286 - 9424.701: 21.0894% ( 160) 00:21:33.827 9424.701 - 9487.116: 22.3757% ( 149) 00:21:33.827 9487.116 - 9549.531: 23.5497% ( 136) 00:21:33.827 9549.531 - 9611.947: 24.7324% ( 137) 00:21:33.827 9611.947 - 9674.362: 26.0014% ( 147) 00:21:33.827 9674.362 - 9736.777: 27.1409% ( 132) 00:21:33.827 9736.777 - 9799.192: 28.2113% ( 124) 00:21:33.827 9799.192 - 9861.608: 29.2990% ( 126) 00:21:33.827 9861.608 - 9924.023: 30.4558% ( 134) 00:21:33.827 9924.023 - 9986.438: 31.6730% ( 141) 00:21:33.827 9986.438 - 10048.853: 33.2269% ( 180) 00:21:33.827 10048.853 - 10111.269: 34.8325% ( 186) 00:21:33.827 10111.269 - 10173.684: 36.5504% ( 199) 00:21:33.827 10173.684 - 10236.099: 38.3805% ( 212) 00:21:33.827 10236.099 - 10298.514: 40.3401% ( 227) 00:21:33.827 10298.514 - 10360.930: 42.4206% ( 241) 00:21:33.827 10360.930 - 10423.345: 44.7773% ( 273) 00:21:33.827 10423.345 - 10485.760: 47.1081% ( 270) 00:21:33.827 10485.760 - 10548.175: 49.5511% ( 283) 00:21:33.827 10548.175 - 10610.590: 52.0114% ( 285) 00:21:33.827 10610.590 - 10673.006: 54.4717% ( 285) 00:21:33.827 10673.006 - 10735.421: 56.9751% ( 290) 00:21:33.827 10735.421 - 10797.836: 59.4613% ( 288) 00:21:33.827 10797.836 - 10860.251: 61.7662% ( 267) 00:21:33.827 10860.251 - 10922.667: 64.1143% ( 272) 00:21:33.827 10922.667 - 10985.082: 66.4192% ( 267) 00:21:33.827 10985.082 - 11047.497: 68.4738% ( 238) 00:21:33.827 11047.497 - 11109.912: 70.4593% ( 230) 00:21:33.827 11109.912 - 11172.328: 72.2980% ( 213) 00:21:33.827 11172.328 - 11234.743: 73.9555% ( 192) 00:21:33.827 11234.743 - 11297.158: 75.5007% ( 179) 00:21:33.827 11297.158 - 11359.573: 76.7697% ( 147) 00:21:33.827 11359.573 - 11421.989: 77.8747% ( 128) 00:21:33.827 11421.989 - 11484.404: 78.8070% ( 108) 00:21:33.827 11484.404 - 11546.819: 79.5494% ( 86) 00:21:33.827 11546.819 - 11609.234: 80.3177% ( 89) 00:21:33.827 11609.234 - 11671.650: 81.1205% ( 93) 00:21:33.827 11671.650 - 11734.065: 81.8974% ( 90) 00:21:33.827 11734.065 - 11796.480: 82.7003% ( 93) 00:21:33.827 11796.480 - 11858.895: 83.5031% ( 93) 00:21:33.827 11858.895 - 11921.310: 84.1851% ( 79) 00:21:33.827 11921.310 - 11983.726: 84.8066% ( 72) 00:21:33.827 11983.726 - 12046.141: 85.3160% ( 59) 00:21:33.827 12046.141 - 12108.556: 85.7821% ( 54) 00:21:33.827 12108.556 - 12170.971: 86.2051% ( 49) 00:21:33.827 12170.971 - 12233.387: 86.5936% ( 45) 00:21:33.827 12233.387 - 12295.802: 86.9907% ( 46) 00:21:33.827 12295.802 - 12358.217: 87.3619% ( 43) 00:21:33.827 12358.217 - 12420.632: 87.7590% ( 46) 00:21:33.827 12420.632 - 12483.048: 88.1647% ( 47) 00:21:33.827 12483.048 - 12545.463: 88.5704% ( 47) 00:21:33.827 12545.463 - 12607.878: 88.9244% ( 41) 00:21:33.827 12607.878 - 12670.293: 89.2956% ( 43) 00:21:33.827 12670.293 - 12732.709: 89.6409% ( 40) 00:21:33.827 12732.709 - 12795.124: 89.9862% ( 40) 00:21:33.827 12795.124 - 12857.539: 90.2538% ( 31) 00:21:33.827 12857.539 - 12919.954: 90.5041% ( 29) 00:21:33.827 12919.954 - 12982.370: 90.7113% ( 24) 00:21:33.827 12982.370 - 13044.785: 90.9530% ( 28) 00:21:33.827 13044.785 - 13107.200: 91.1689% ( 25) 00:21:33.827 13107.200 - 13169.615: 91.4192% ( 29) 00:21:33.827 13169.615 - 13232.030: 91.6523% ( 27) 00:21:33.827 
13232.030 - 13294.446: 91.8508% ( 23) 00:21:33.827 13294.446 - 13356.861: 92.0666% ( 25) 00:21:33.827 13356.861 - 13419.276: 92.2479% ( 21) 00:21:33.827 13419.276 - 13481.691: 92.4033% ( 18) 00:21:33.827 13481.691 - 13544.107: 92.5932% ( 22) 00:21:33.827 13544.107 - 13606.522: 92.7745% ( 21) 00:21:33.827 13606.522 - 13668.937: 92.9903% ( 25) 00:21:33.827 13668.937 - 13731.352: 93.1975% ( 24) 00:21:33.827 13731.352 - 13793.768: 93.4047% ( 24) 00:21:33.827 13793.768 - 13856.183: 93.5860% ( 21) 00:21:33.827 13856.183 - 13918.598: 93.8018% ( 25) 00:21:33.827 13918.598 - 13981.013: 93.9744% ( 20) 00:21:33.827 13981.013 - 14043.429: 94.1557% ( 21) 00:21:33.827 14043.429 - 14105.844: 94.3025% ( 17) 00:21:33.827 14105.844 - 14168.259: 94.4406% ( 16) 00:21:33.827 14168.259 - 14230.674: 94.5960% ( 18) 00:21:33.827 14230.674 - 14293.090: 94.7169% ( 14) 00:21:33.827 14293.090 - 14355.505: 94.8291% ( 13) 00:21:33.827 14355.505 - 14417.920: 94.9240% ( 11) 00:21:33.827 14417.920 - 14480.335: 95.0363% ( 13) 00:21:33.827 14480.335 - 14542.750: 95.1571% ( 14) 00:21:33.827 14542.750 - 14605.166: 95.2693% ( 13) 00:21:33.827 14605.166 - 14667.581: 95.3557% ( 10) 00:21:33.827 14667.581 - 14729.996: 95.4075% ( 6) 00:21:33.827 14729.996 - 14792.411: 95.4593% ( 6) 00:21:33.827 14792.411 - 14854.827: 95.5369% ( 9) 00:21:33.827 14854.827 - 14917.242: 95.5974% ( 7) 00:21:33.827 14917.242 - 14979.657: 95.6578% ( 7) 00:21:33.827 14979.657 - 15042.072: 95.6837% ( 3) 00:21:33.827 15042.072 - 15104.488: 95.7269% ( 5) 00:21:33.827 15104.488 - 15166.903: 95.7528% ( 3) 00:21:33.827 15166.903 - 15229.318: 95.7873% ( 4) 00:21:33.827 15229.318 - 15291.733: 95.8477% ( 7) 00:21:33.827 15291.733 - 15354.149: 95.9081% ( 7) 00:21:33.827 15354.149 - 15416.564: 95.9772% ( 8) 00:21:33.827 15416.564 - 15478.979: 96.0463% ( 8) 00:21:33.827 15478.979 - 15541.394: 96.1067% ( 7) 00:21:33.827 15541.394 - 15603.810: 96.1758% ( 8) 00:21:33.827 15603.810 - 15666.225: 96.2362% ( 7) 00:21:33.827 15666.225 - 15728.640: 96.3139% ( 9) 00:21:33.827 15728.640 - 15791.055: 96.4002% ( 10) 00:21:33.827 15791.055 - 15853.470: 96.5038% ( 12) 00:21:33.827 15853.470 - 15915.886: 96.5901% ( 10) 00:21:33.827 15915.886 - 15978.301: 96.6851% ( 11) 00:21:33.827 15978.301 - 16103.131: 96.8318% ( 17) 00:21:33.827 16103.131 - 16227.962: 96.9441% ( 13) 00:21:33.827 16227.962 - 16352.792: 97.0045% ( 7) 00:21:33.827 16352.792 - 16477.623: 97.0735% ( 8) 00:21:33.827 16477.623 - 16602.453: 97.1340% ( 7) 00:21:33.827 16602.453 - 16727.284: 97.2030% ( 8) 00:21:33.827 16727.284 - 16852.114: 97.2376% ( 4) 00:21:33.827 16976.945 - 17101.775: 97.2635% ( 3) 00:21:33.827 17101.775 - 17226.606: 97.3239% ( 7) 00:21:33.827 17226.606 - 17351.436: 97.3843% ( 7) 00:21:33.827 17351.436 - 17476.267: 97.4448% ( 7) 00:21:33.827 17476.267 - 17601.097: 97.5052% ( 7) 00:21:33.827 17601.097 - 17725.928: 97.5656% ( 7) 00:21:33.827 17725.928 - 17850.758: 97.6260% ( 7) 00:21:33.827 17850.758 - 17975.589: 97.6865% ( 7) 00:21:33.827 17975.589 - 18100.419: 97.7469% ( 7) 00:21:33.827 18100.419 - 18225.250: 97.8073% ( 7) 00:21:33.827 18225.250 - 18350.080: 97.9368% ( 15) 00:21:33.827 18350.080 - 18474.910: 98.0663% ( 15) 00:21:33.827 18474.910 - 18599.741: 98.2131% ( 17) 00:21:33.827 18599.741 - 18724.571: 98.3339% ( 14) 00:21:33.827 18724.571 - 18849.402: 98.4720% ( 16) 00:21:33.827 18849.402 - 18974.232: 98.6102% ( 16) 00:21:33.827 18974.232 - 19099.063: 98.7569% ( 17) 00:21:33.827 19099.063 - 19223.893: 98.8432% ( 10) 00:21:33.827 19223.893 - 19348.724: 98.8950% ( 6) 00:21:33.827 34702.872 - 
34952.533: 98.9382% ( 5) 00:21:33.827 34952.533 - 35202.194: 98.9986% ( 7) 00:21:33.827 35202.194 - 35451.855: 99.0590% ( 7) 00:21:33.827 35451.855 - 35701.516: 99.1281% ( 8) 00:21:33.827 35701.516 - 35951.177: 99.1972% ( 8) 00:21:33.827 35951.177 - 36200.838: 99.2576% ( 7) 00:21:33.827 36200.838 - 36450.499: 99.3267% ( 8) 00:21:33.827 36450.499 - 36700.160: 99.3871% ( 7) 00:21:33.827 36700.160 - 36949.821: 99.4475% ( 7) 00:21:33.827 41443.718 - 41693.379: 99.4820% ( 4) 00:21:33.827 41693.379 - 41943.040: 99.5425% ( 7) 00:21:33.827 41943.040 - 42192.701: 99.6115% ( 8) 00:21:33.827 42192.701 - 42442.362: 99.6633% ( 6) 00:21:33.827 42442.362 - 42692.023: 99.7238% ( 7) 00:21:33.827 42692.023 - 42941.684: 99.7842% ( 7) 00:21:33.827 42941.684 - 43191.345: 99.8446% ( 7) 00:21:33.827 43191.345 - 43441.006: 99.9137% ( 8) 00:21:33.827 43441.006 - 43690.667: 99.9741% ( 7) 00:21:33.827 43690.667 - 43940.328: 100.0000% ( 3) 00:21:33.827 00:21:33.827 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:21:33.827 ============================================================================== 00:21:33.827 Range in us Cumulative IO count 00:21:33.827 8238.811 - 8301.227: 0.1036% ( 12) 00:21:33.827 8301.227 - 8363.642: 0.3194% ( 25) 00:21:33.827 8363.642 - 8426.057: 0.6388% ( 37) 00:21:33.827 8426.057 - 8488.472: 1.0704% ( 50) 00:21:33.827 8488.472 - 8550.888: 1.6402% ( 66) 00:21:33.827 8550.888 - 8613.303: 2.3912% ( 87) 00:21:33.827 8613.303 - 8675.718: 3.2113% ( 95) 00:21:33.827 8675.718 - 8738.133: 4.2213% ( 117) 00:21:33.827 8738.133 - 8800.549: 5.3867% ( 135) 00:21:33.827 8800.549 - 8862.964: 6.7939% ( 163) 00:21:33.827 8862.964 - 8925.379: 8.3477% ( 180) 00:21:33.827 8925.379 - 8987.794: 9.8930% ( 179) 00:21:33.827 8987.794 - 9050.210: 11.6022% ( 198) 00:21:33.827 9050.210 - 9112.625: 13.3287% ( 200) 00:21:33.827 9112.625 - 9175.040: 15.0811% ( 203) 00:21:33.827 9175.040 - 9237.455: 16.8163% ( 201) 00:21:33.827 9237.455 - 9299.870: 18.4392% ( 188) 00:21:33.827 9299.870 - 9362.286: 19.9845% ( 179) 00:21:33.827 9362.286 - 9424.701: 21.4520% ( 170) 00:21:33.827 9424.701 - 9487.116: 22.7124% ( 146) 00:21:33.827 9487.116 - 9549.531: 23.9296% ( 141) 00:21:33.827 9549.531 - 9611.947: 25.1381% ( 140) 00:21:33.827 9611.947 - 9674.362: 26.3122% ( 136) 00:21:33.827 9674.362 - 9736.777: 27.3740% ( 123) 00:21:33.827 9736.777 - 9799.192: 28.4617% ( 126) 00:21:33.827 9799.192 - 9861.608: 29.6184% ( 134) 00:21:33.827 9861.608 - 9924.023: 30.7148% ( 127) 00:21:33.827 9924.023 - 9986.438: 31.9320% ( 141) 00:21:33.827 9986.438 - 10048.853: 33.2096% ( 148) 00:21:33.827 10048.853 - 10111.269: 34.7721% ( 181) 00:21:33.827 10111.269 - 10173.684: 36.4468% ( 194) 00:21:33.827 10173.684 - 10236.099: 38.2597% ( 210) 00:21:33.827 10236.099 - 10298.514: 40.2020% ( 225) 00:21:33.827 10298.514 - 10360.930: 42.2393% ( 236) 00:21:33.828 10360.930 - 10423.345: 44.4233% ( 253) 00:21:33.828 10423.345 - 10485.760: 46.6851% ( 262) 00:21:33.828 10485.760 - 10548.175: 49.0677% ( 276) 00:21:33.828 10548.175 - 10610.590: 51.4762% ( 279) 00:21:33.828 10610.590 - 10673.006: 53.8760% ( 278) 00:21:33.828 10673.006 - 10735.421: 56.4140% ( 294) 00:21:33.828 10735.421 - 10797.836: 58.9434% ( 293) 00:21:33.828 10797.836 - 10860.251: 61.3864% ( 283) 00:21:33.828 10860.251 - 10922.667: 63.7604% ( 275) 00:21:33.828 10922.667 - 10985.082: 65.9617% ( 255) 00:21:33.828 10985.082 - 11047.497: 68.0249% ( 239) 00:21:33.828 11047.497 - 11109.912: 69.9413% ( 222) 00:21:33.828 11109.912 - 11172.328: 71.8750% ( 224) 00:21:33.828 11172.328 - 11234.743: 
73.6706% ( 208) 00:21:33.828 11234.743 - 11297.158: 75.2762% ( 186) 00:21:33.828 11297.158 - 11359.573: 76.6316% ( 157) 00:21:33.828 11359.573 - 11421.989: 77.6502% ( 118) 00:21:33.828 11421.989 - 11484.404: 78.6775% ( 119) 00:21:33.828 11484.404 - 11546.819: 79.5235% ( 98) 00:21:33.828 11546.819 - 11609.234: 80.3436% ( 95) 00:21:33.828 11609.234 - 11671.650: 81.2155% ( 101) 00:21:33.828 11671.650 - 11734.065: 82.0097% ( 92) 00:21:33.828 11734.065 - 11796.480: 82.7521% ( 86) 00:21:33.828 11796.480 - 11858.895: 83.5204% ( 89) 00:21:33.828 11858.895 - 11921.310: 84.1592% ( 74) 00:21:33.828 11921.310 - 11983.726: 84.7894% ( 73) 00:21:33.828 11983.726 - 12046.141: 85.2728% ( 56) 00:21:33.828 12046.141 - 12108.556: 85.7994% ( 61) 00:21:33.828 12108.556 - 12170.971: 86.2051% ( 47) 00:21:33.828 12170.971 - 12233.387: 86.5936% ( 45) 00:21:33.828 12233.387 - 12295.802: 87.0252% ( 50) 00:21:33.828 12295.802 - 12358.217: 87.4914% ( 54) 00:21:33.828 12358.217 - 12420.632: 87.9662% ( 55) 00:21:33.828 12420.632 - 12483.048: 88.4151% ( 52) 00:21:33.828 12483.048 - 12545.463: 88.8381% ( 49) 00:21:33.828 12545.463 - 12607.878: 89.2265% ( 45) 00:21:33.828 12607.878 - 12670.293: 89.5718% ( 40) 00:21:33.828 12670.293 - 12732.709: 89.9258% ( 41) 00:21:33.828 12732.709 - 12795.124: 90.2711% ( 40) 00:21:33.828 12795.124 - 12857.539: 90.5646% ( 34) 00:21:33.828 12857.539 - 12919.954: 90.8581% ( 34) 00:21:33.828 12919.954 - 12982.370: 91.1257% ( 31) 00:21:33.828 12982.370 - 13044.785: 91.4192% ( 34) 00:21:33.828 13044.785 - 13107.200: 91.7300% ( 36) 00:21:33.828 13107.200 - 13169.615: 91.9890% ( 30) 00:21:33.828 13169.615 - 13232.030: 92.2307% ( 28) 00:21:33.828 13232.030 - 13294.446: 92.4637% ( 27) 00:21:33.828 13294.446 - 13356.861: 92.6278% ( 19) 00:21:33.828 13356.861 - 13419.276: 92.7573% ( 15) 00:21:33.828 13419.276 - 13481.691: 92.9040% ( 17) 00:21:33.828 13481.691 - 13544.107: 93.1026% ( 23) 00:21:33.828 13544.107 - 13606.522: 93.2925% ( 22) 00:21:33.828 13606.522 - 13668.937: 93.4565% ( 19) 00:21:33.828 13668.937 - 13731.352: 93.6291% ( 20) 00:21:33.828 13731.352 - 13793.768: 93.8018% ( 20) 00:21:33.828 13793.768 - 13856.183: 93.9485% ( 17) 00:21:33.828 13856.183 - 13918.598: 94.0953% ( 17) 00:21:33.828 13918.598 - 13981.013: 94.1989% ( 12) 00:21:33.828 13981.013 - 14043.429: 94.2766% ( 9) 00:21:33.828 14043.429 - 14105.844: 94.3629% ( 10) 00:21:33.828 14105.844 - 14168.259: 94.4492% ( 10) 00:21:33.828 14168.259 - 14230.674: 94.5356% ( 10) 00:21:33.828 14230.674 - 14293.090: 94.6392% ( 12) 00:21:33.828 14293.090 - 14355.505: 94.7255% ( 10) 00:21:33.828 14355.505 - 14417.920: 94.8032% ( 9) 00:21:33.828 14417.920 - 14480.335: 94.8809% ( 9) 00:21:33.828 14480.335 - 14542.750: 94.9499% ( 8) 00:21:33.828 14542.750 - 14605.166: 95.0017% ( 6) 00:21:33.828 14605.166 - 14667.581: 95.0449% ( 5) 00:21:33.828 14667.581 - 14729.996: 95.0967% ( 6) 00:21:33.828 14729.996 - 14792.411: 95.1485% ( 6) 00:21:33.828 14792.411 - 14854.827: 95.1916% ( 5) 00:21:33.828 14854.827 - 14917.242: 95.2434% ( 6) 00:21:33.828 14917.242 - 14979.657: 95.2866% ( 5) 00:21:33.828 14979.657 - 15042.072: 95.3470% ( 7) 00:21:33.828 15042.072 - 15104.488: 95.3902% ( 5) 00:21:33.828 15104.488 - 15166.903: 95.4334% ( 5) 00:21:33.828 15166.903 - 15229.318: 95.4593% ( 3) 00:21:33.828 15229.318 - 15291.733: 95.4852% ( 3) 00:21:33.828 15291.733 - 15354.149: 95.5110% ( 3) 00:21:33.828 15354.149 - 15416.564: 95.5369% ( 3) 00:21:33.828 15416.564 - 15478.979: 95.5801% ( 5) 00:21:33.828 15478.979 - 15541.394: 95.6319% ( 6) 00:21:33.828 15541.394 - 
15603.810: 95.6578% ( 3) 00:21:33.828 15603.810 - 15666.225: 95.6923% ( 4) 00:21:33.828 15666.225 - 15728.640: 95.7182% ( 3) 00:21:33.828 15728.640 - 15791.055: 95.7614% ( 5) 00:21:33.828 15791.055 - 15853.470: 95.8218% ( 7) 00:21:33.828 15853.470 - 15915.886: 95.8823% ( 7) 00:21:33.828 15915.886 - 15978.301: 95.9513% ( 8) 00:21:33.828 15978.301 - 16103.131: 96.1153% ( 19) 00:21:33.828 16103.131 - 16227.962: 96.3139% ( 23) 00:21:33.828 16227.962 - 16352.792: 96.5038% ( 22) 00:21:33.828 16352.792 - 16477.623: 96.7023% ( 23) 00:21:33.828 16477.623 - 16602.453: 96.8750% ( 20) 00:21:33.828 16602.453 - 16727.284: 97.0563% ( 21) 00:21:33.828 16727.284 - 16852.114: 97.2376% ( 21) 00:21:33.828 16852.114 - 16976.945: 97.3930% ( 18) 00:21:33.828 16976.945 - 17101.775: 97.4706% ( 9) 00:21:33.828 17101.775 - 17226.606: 97.5224% ( 6) 00:21:33.828 17226.606 - 17351.436: 97.5829% ( 7) 00:21:33.828 17351.436 - 17476.267: 97.6260% ( 5) 00:21:33.828 17476.267 - 17601.097: 97.6865% ( 7) 00:21:33.828 17601.097 - 17725.928: 97.7383% ( 6) 00:21:33.828 17725.928 - 17850.758: 97.7901% ( 6) 00:21:33.828 18350.080 - 18474.910: 97.8419% ( 6) 00:21:33.828 18474.910 - 18599.741: 97.9713% ( 15) 00:21:33.828 18599.741 - 18724.571: 98.1008% ( 15) 00:21:33.828 18724.571 - 18849.402: 98.2217% ( 14) 00:21:33.828 18849.402 - 18974.232: 98.3684% ( 17) 00:21:33.828 18974.232 - 19099.063: 98.4979% ( 15) 00:21:33.828 19099.063 - 19223.893: 98.6447% ( 17) 00:21:33.828 19223.893 - 19348.724: 98.7655% ( 14) 00:21:33.828 19348.724 - 19473.554: 98.8519% ( 10) 00:21:33.828 19473.554 - 19598.385: 98.8950% ( 5) 00:21:33.828 31706.941 - 31831.771: 98.9209% ( 3) 00:21:33.828 31831.771 - 31956.602: 98.9468% ( 3) 00:21:33.828 31956.602 - 32206.263: 99.0159% ( 8) 00:21:33.828 32206.263 - 32455.924: 99.0849% ( 8) 00:21:33.828 32455.924 - 32705.585: 99.1540% ( 8) 00:21:33.828 32705.585 - 32955.246: 99.2058% ( 6) 00:21:33.828 32955.246 - 33204.907: 99.2749% ( 8) 00:21:33.828 33204.907 - 33454.568: 99.3353% ( 7) 00:21:33.828 33454.568 - 33704.229: 99.4044% ( 8) 00:21:33.828 33704.229 - 33953.890: 99.4302% ( 3) 00:21:33.828 34203.550 - 34453.211: 99.4475% ( 2) 00:21:33.828 38697.448 - 38947.109: 99.4561% ( 1) 00:21:33.828 38947.109 - 39196.770: 99.5166% ( 7) 00:21:33.828 39196.770 - 39446.430: 99.5856% ( 8) 00:21:33.828 39446.430 - 39696.091: 99.6461% ( 7) 00:21:33.828 39696.091 - 39945.752: 99.7151% ( 8) 00:21:33.828 39945.752 - 40195.413: 99.7842% ( 8) 00:21:33.828 40195.413 - 40445.074: 99.8446% ( 7) 00:21:33.828 40445.074 - 40694.735: 99.9050% ( 7) 00:21:33.828 40694.735 - 40944.396: 99.9655% ( 7) 00:21:33.828 40944.396 - 41194.057: 100.0000% ( 4) 00:21:33.828 00:21:33.828 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:21:33.828 ============================================================================== 00:21:33.828 Range in us Cumulative IO count 00:21:33.828 8238.811 - 8301.227: 0.1122% ( 13) 00:21:33.828 8301.227 - 8363.642: 0.2762% ( 19) 00:21:33.828 8363.642 - 8426.057: 0.5698% ( 34) 00:21:33.828 8426.057 - 8488.472: 1.0445% ( 55) 00:21:33.828 8488.472 - 8550.888: 1.6661% ( 72) 00:21:33.828 8550.888 - 8613.303: 2.3912% ( 84) 00:21:33.828 8613.303 - 8675.718: 3.2113% ( 95) 00:21:33.828 8675.718 - 8738.133: 4.3336% ( 130) 00:21:33.828 8738.133 - 8800.549: 5.5335% ( 139) 00:21:33.828 8800.549 - 8862.964: 6.9233% ( 161) 00:21:33.828 8862.964 - 8925.379: 8.4168% ( 173) 00:21:33.828 8925.379 - 8987.794: 10.0311% ( 187) 00:21:33.828 8987.794 - 9050.210: 11.6195% ( 184) 00:21:33.828 9050.210 - 9112.625: 13.3201% ( 197) 
00:21:33.828 9112.625 - 9175.040: 15.0639% ( 202) 00:21:33.828 9175.040 - 9237.455: 16.7041% ( 190) 00:21:33.828 9237.455 - 9299.870: 18.2925% ( 184) 00:21:33.828 9299.870 - 9362.286: 19.8118% ( 176) 00:21:33.828 9362.286 - 9424.701: 21.2103% ( 162) 00:21:33.828 9424.701 - 9487.116: 22.5224% ( 152) 00:21:33.828 9487.116 - 9549.531: 23.7137% ( 138) 00:21:33.828 9549.531 - 9611.947: 24.8619% ( 133) 00:21:33.828 9611.947 - 9674.362: 25.8287% ( 112) 00:21:33.828 9674.362 - 9736.777: 26.8646% ( 120) 00:21:33.828 9736.777 - 9799.192: 27.9178% ( 122) 00:21:33.828 9799.192 - 9861.608: 29.0919% ( 136) 00:21:33.828 9861.608 - 9924.023: 30.2314% ( 132) 00:21:33.828 9924.023 - 9986.438: 31.4831% ( 145) 00:21:33.828 9986.438 - 10048.853: 32.8557% ( 159) 00:21:33.828 10048.853 - 10111.269: 34.4441% ( 184) 00:21:33.828 10111.269 - 10173.684: 36.1447% ( 197) 00:21:33.828 10173.684 - 10236.099: 38.0007% ( 215) 00:21:33.828 10236.099 - 10298.514: 39.8653% ( 216) 00:21:33.828 10298.514 - 10360.930: 41.9285% ( 239) 00:21:33.828 10360.930 - 10423.345: 43.9054% ( 229) 00:21:33.828 10423.345 - 10485.760: 46.3311% ( 281) 00:21:33.828 10485.760 - 10548.175: 48.6188% ( 265) 00:21:33.828 10548.175 - 10610.590: 51.1395% ( 292) 00:21:33.828 10610.590 - 10673.006: 53.4444% ( 267) 00:21:33.828 10673.006 - 10735.421: 55.9133% ( 286) 00:21:33.828 10735.421 - 10797.836: 58.1923% ( 264) 00:21:33.828 10797.836 - 10860.251: 60.6181% ( 281) 00:21:33.828 10860.251 - 10922.667: 62.9748% ( 273) 00:21:33.828 10922.667 - 10985.082: 65.2279% ( 261) 00:21:33.828 10985.082 - 11047.497: 67.2220% ( 231) 00:21:33.828 11047.497 - 11109.912: 69.1903% ( 228) 00:21:33.828 11109.912 - 11172.328: 70.9945% ( 209) 00:21:33.828 11172.328 - 11234.743: 72.6433% ( 191) 00:21:33.828 11234.743 - 11297.158: 74.2317% ( 184) 00:21:33.828 11297.158 - 11359.573: 75.6302% ( 162) 00:21:33.828 11359.573 - 11421.989: 76.8128% ( 137) 00:21:33.828 11421.989 - 11484.404: 77.7365% ( 107) 00:21:33.829 11484.404 - 11546.819: 78.6430% ( 105) 00:21:33.829 11546.819 - 11609.234: 79.4544% ( 94) 00:21:33.829 11609.234 - 11671.650: 80.2831% ( 96) 00:21:33.829 11671.650 - 11734.065: 81.0860% ( 93) 00:21:33.829 11734.065 - 11796.480: 81.8974% ( 94) 00:21:33.829 11796.480 - 11858.895: 82.7262% ( 96) 00:21:33.829 11858.895 - 11921.310: 83.4168% ( 80) 00:21:33.829 11921.310 - 11983.726: 84.1937% ( 90) 00:21:33.829 11983.726 - 12046.141: 84.8757% ( 79) 00:21:33.829 12046.141 - 12108.556: 85.4972% ( 72) 00:21:33.829 12108.556 - 12170.971: 86.1015% ( 70) 00:21:33.829 12170.971 - 12233.387: 86.6367% ( 62) 00:21:33.829 12233.387 - 12295.802: 87.1115% ( 55) 00:21:33.829 12295.802 - 12358.217: 87.5777% ( 54) 00:21:33.829 12358.217 - 12420.632: 88.0698% ( 57) 00:21:33.829 12420.632 - 12483.048: 88.5877% ( 60) 00:21:33.829 12483.048 - 12545.463: 89.0625% ( 55) 00:21:33.829 12545.463 - 12607.878: 89.4941% ( 50) 00:21:33.829 12607.878 - 12670.293: 89.8394% ( 40) 00:21:33.829 12670.293 - 12732.709: 90.2538% ( 48) 00:21:33.829 12732.709 - 12795.124: 90.6768% ( 49) 00:21:33.829 12795.124 - 12857.539: 91.0394% ( 42) 00:21:33.829 12857.539 - 12919.954: 91.3415% ( 35) 00:21:33.829 12919.954 - 12982.370: 91.6523% ( 36) 00:21:33.829 12982.370 - 13044.785: 91.8681% ( 25) 00:21:33.829 13044.785 - 13107.200: 92.0925% ( 26) 00:21:33.829 13107.200 - 13169.615: 92.2997% ( 24) 00:21:33.829 13169.615 - 13232.030: 92.5242% ( 26) 00:21:33.829 13232.030 - 13294.446: 92.7659% ( 28) 00:21:33.829 13294.446 - 13356.861: 92.9299% ( 19) 00:21:33.829 13356.861 - 13419.276: 93.0508% ( 14) 00:21:33.829 13419.276 - 
00:21:33.829 [histogram buckets from 13481.691 us through 37948.465 us omitted: cumulative IO count climbs from 93.2061% ( 18) to 100.0000% ( 7) at 37948.465 us]
00:21:33.829
00:21:33.829 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:21:33.829 ==============================================================================
00:21:33.829        Range in us     Cumulative    IO count
00:21:33.830 [histogram buckets omitted: from 8238.811 - 8301.227: 0.0859% ( 10) rising to 30208.975 - 30333.806: 100.0000% ( 1)]
00:21:33.830
00:21:33.830 14:40:42 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:21:35.207 Initializing NVMe Controllers
00:21:35.207 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:21:35.207 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:21:35.207 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:21:35.207 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:21:35.207 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:21:35.207 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:21:35.207 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:21:35.207 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:21:35.207 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:21:35.207 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:21:35.207 Initialization complete. Launching workers.
00:21:35.207 ========================================================
00:21:35.207                                                                              Latency(us)
00:21:35.207 Device Information                     :       IOPS      MiB/s    Average        min        max
00:21:35.207 PCIE (0000:00:10.0) NSID 1 from core 0:    8760.83     102.67   14662.13   10758.20   49625.99
00:21:35.207 PCIE (0000:00:11.0) NSID 1 from core 0:    8760.83     102.67   14618.93   10810.76   45881.07
00:21:35.207 PCIE (0000:00:13.0) NSID 1 from core 0:    8760.83     102.67   14578.07   11047.10   42577.75
00:21:35.207 PCIE (0000:00:12.0) NSID 1 from core 0:    8760.83     102.67   14539.03   10997.75   38967.90
00:21:35.207 PCIE (0000:00:12.0) NSID 2 from core 0:    8824.78     103.42   14396.07   10576.74   30110.71
00:21:35.207 PCIE (0000:00:12.0) NSID 3 from core 0:    8824.78     103.42   14359.67   10824.54   26669.02
00:21:35.207 ========================================================
00:21:35.207 Total                                  :   52692.90     617.49   14525.29   10576.74   49625.99
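The MiB/s column follows directly from the IOPS column, since every I/O in this run is 12288 bytes (the -o argument): MiB/s = IOPS x 12288 / 2^20. A quick check of the first row and the Total row, a minimal sketch assuming bc is available on the host:

    # 8760.83 IOPS at 12288 bytes per I/O, converted to MiB/s
    echo 'scale=2; 8760.83 * 12288 / 1048576' | bc     # 102.66 (table shows 102.67 after rounding)
    # aggregate IOPS across the six namespaces
    echo 'scale=2; 52692.90 * 12288 / 1048576' | bc    # 617.49, matching the Total row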
00:21:35.207 Summary latency data for all namespaces from core 0 (one column per namespace, values in us):
00:21:35.207 =================================================================================
00:21:35.207   Percentile     10.0 ns1     11.0 ns1     13.0 ns1     12.0 ns1     12.0 ns2     12.0 ns3
00:21:35.207    1.00000%    11234.743    11359.573    11421.989    11484.404    11359.573    11297.158
00:21:35.207   10.00000%    12732.709    12795.124    12795.124    12795.124    12732.709    12857.539
00:21:35.207   25.00000%    13481.691    13544.107    13481.691    13544.107    13544.107    13544.107
00:21:35.208   50.00000%    14230.674    14230.674    14168.259    14168.259    14168.259    14230.674
00:21:35.208   75.00000%    15104.488    15104.488    15104.488    15166.903    15104.488    15104.488
00:21:35.208   90.00000%    15978.301    15978.301    16103.131    16103.131    15978.301    15978.301
00:21:35.208   95.00000%    16727.284    16727.284    16727.284    16727.284    16602.453    16602.453
00:21:35.208   98.00000%    18225.250    18100.419    17850.758    18350.080    18724.571    18225.250
00:21:35.208   99.00000%    40944.396    36949.821    33953.890    30458.636    21720.503    19972.876
00:21:35.208   99.50000%    47685.242    44189.989    40944.396    37449.143    28336.518    24966.095
00:21:35.208   99.90000%    49432.869    45687.954    42442.362    38697.448    29834.484    26339.230
00:21:35.208   99.99000%    49682.530    45937.615    42692.023    39196.770    30208.975    26713.722
00:21:35.208 (the 99.99900% and finer percentiles equal the 99.99000% value, i.e. the per-namespace maximum)
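Every histogram below reports cumulative completion: each row is a latency bucket in microseconds plus the running percentage of I/Os that completed in that bucket or earlier, so the percentile summary above is read straight off these tables as the upper bound of the first bucket whose cumulative column reaches the target. A minimal awk sketch, where hist.txt is a hypothetical three-column extract of one histogram (bucket start, bucket end, cumulative percent):

    # p99: upper bound of the first bucket at or above 99% cumulative
    awk -v target=99 '$3 >= target { print $2 " us"; exit }' hist.txt

For PCIE (0000:00:12.0) NSID 3, for example, the cumulative column first reaches 99% in the bucket ending at 19972.876 us, which is exactly its 99.00000% entry in the summary.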
00:21:35.208 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:21:35.208 ==============================================================================
00:21:35.208        Range in us     Cumulative    IO count
00:21:35.209 [histogram buckets omitted: from 10735.421 - 10797.836: 0.0228% ( 2) rising to 49432.869 - 49682.530: 100.0000% ( 3)]
00:21:35.209
00:21:35.209 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:21:35.209 ==============================================================================
00:21:35.209        Range in us     Cumulative    IO count
00:21:35.209 [histogram buckets omitted: from 10797.836 - 10860.251: 0.0114% ( 1) rising to 45687.954 - 45937.615: 100.0000% ( 6)]
00:21:35.210
00:21:35.210 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:21:35.210 ==============================================================================
00:21:35.210        Range in us     Cumulative    IO count
00:21:35.210 [histogram buckets omitted: from 10985.082 - 11047.497: 0.0114% ( 1) rising to 42442.362 - 42692.023: 100.0000% ( 4)]
00:21:35.210
00:21:35.210 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:21:35.210 ==============================================================================
00:21:35.210        Range in us     Cumulative    IO count
00:21:35.211 [histogram buckets omitted: from 10985.082 - 11047.497: 0.0228% ( 2) rising to 38947.109 - 39196.770: 100.0000% ( 1)]
00:21:35.211
00:21:35.211 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:21:35.211 ==============================================================================
00:21:35.211        Range in us     Cumulative    IO count
00:21:35.212 [histogram buckets omitted: from 10548.175 - 10610.590: 0.0113% ( 1) rising to 30084.145 - 30208.975: 100.0000% ( 1)]
00:21:35.212
00:21:35.212 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:21:35.212 ==============================================================================
00:21:35.212        Range in us     Cumulative    IO count
00:21:35.213 [histogram buckets omitted: from 10797.836 - 10860.251: 0.0679% ( 6) rising to 26588.891 - 26713.722: 100.0000% ( 2)]
00:21:35.213
00:21:35.213 ************************************
00:21:35.213 END TEST nvme_perf
00:21:35.213 ************************************
00:21:35.213 14:40:43 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:21:35.213
00:21:35.213 real    0m2.823s
00:21:35.213 user    0m2.364s
00:21:35.213 sys     0m0.353s
00:21:35.213 14:40:43 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:21:35.213 14:40:43 -- common/autotest_common.sh@10 -- # set +x
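The perf stage above can be rerun by hand against the same bound controllers; a minimal sketch, assuming the SPDK checkout sits at /home/vagrant/spdk_repo/spdk as in this job (per spdk_nvme_perf's help text, -L enables software latency tracking, and passing it twice, as here, additionally prints the per-bucket histograms):

    # queue depth 128, sequential writes, 12 KiB I/Os, run for 1 second;
    # -i 0 joins shared-memory group 0 so the process can coexist with other SPDK apps
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0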
00:21:35.729 ************************************ 00:21:35.729 END TEST nvme_hello_world 00:21:35.729 ************************************ 00:21:35.729 00:21:35.729 real 0m0.388s 00:21:35.729 user 0m0.156s 00:21:35.729 sys 0m0.193s 00:21:35.729 14:40:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:35.729 14:40:44 -- common/autotest_common.sh@10 -- # set +x 00:21:35.729 14:40:44 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:21:35.729 14:40:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:35.729 14:40:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:35.729 14:40:44 -- common/autotest_common.sh@10 -- # set +x 00:21:35.987 ************************************ 00:21:35.987 START TEST nvme_sgl 00:21:35.987 ************************************ 00:21:35.987 14:40:44 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:21:36.285 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:21:36.285 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:21:36.285 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:21:36.285 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:21:36.285 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:21:36.285 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:21:36.285 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:21:36.285 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:21:36.285 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:21:36.285 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:21:36.285 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:21:36.285 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:21:36.285 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:21:36.285 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:21:36.285 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:21:36.285 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:21:36.285 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:21:36.285 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:21:36.285 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:21:36.285 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:21:36.285 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:21:36.285 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:21:36.285 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:21:36.285 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:21:36.285 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:21:36.285 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:21:36.285 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:21:36.285 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:21:36.285 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:21:36.285 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:21:36.285 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:21:36.285 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:21:36.285 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:21:36.285 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:21:36.285 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:21:36.285 
0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:21:36.285 NVMe Readv/Writev Request test 00:21:36.285 Attached to 0000:00:10.0 00:21:36.285 Attached to 0000:00:11.0 00:21:36.285 Attached to 0000:00:13.0 00:21:36.285 Attached to 0000:00:12.0 00:21:36.285 0000:00:10.0: build_io_request_2 test passed 00:21:36.285 0000:00:10.0: build_io_request_4 test passed 00:21:36.285 0000:00:10.0: build_io_request_5 test passed 00:21:36.285 0000:00:10.0: build_io_request_6 test passed 00:21:36.285 0000:00:10.0: build_io_request_7 test passed 00:21:36.285 0000:00:10.0: build_io_request_10 test passed 00:21:36.285 0000:00:11.0: build_io_request_2 test passed 00:21:36.285 0000:00:11.0: build_io_request_4 test passed 00:21:36.285 0000:00:11.0: build_io_request_5 test passed 00:21:36.285 0000:00:11.0: build_io_request_6 test passed 00:21:36.285 0000:00:11.0: build_io_request_7 test passed 00:21:36.285 0000:00:11.0: build_io_request_10 test passed 00:21:36.285 Cleaning up... 00:21:36.285 ************************************ 00:21:36.285 END TEST nvme_sgl 00:21:36.285 ************************************ 00:21:36.285 00:21:36.285 real 0m0.446s 00:21:36.285 user 0m0.194s 00:21:36.285 sys 0m0.198s 00:21:36.285 14:40:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:36.285 14:40:44 -- common/autotest_common.sh@10 -- # set +x 00:21:36.544 14:40:44 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:21:36.544 14:40:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:36.544 14:40:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:36.544 14:40:44 -- common/autotest_common.sh@10 -- # set +x 00:21:36.544 ************************************ 00:21:36.544 START TEST nvme_e2edp 00:21:36.544 ************************************ 00:21:36.544 14:40:44 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:21:36.802 NVMe Write/Read with End-to-End data protection test 00:21:36.802 Attached to 0000:00:10.0 00:21:36.802 Attached to 0000:00:11.0 00:21:36.802 Attached to 0000:00:13.0 00:21:36.802 Attached to 0000:00:12.0 00:21:36.802 Cleaning up... 
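[Annotation] The controllers in this run are addressed by PCI BDF strings (0000:00:10.0 through 0000:00:13.0), where the device field is hexadecimal; the reserve test a little further down prints the same controllers in decimal, which is why 0000:00:13.0 appears there as "PCI bus 0, device 19, function 0". A quick spot-check of that conversion (the loop below is illustrative, not part of the test scripts):

for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
  dev_hex=${bdf:8:2}                          # device field of domain:bus:device.function
  printf '%s -> PCI device %d\n' "$bdf" "0x$dev_hex"
done
# 0000:00:10.0 -> PCI device 16 ... 0000:00:13.0 -> PCI device 19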
00:21:36.802 ************************************ 00:21:36.802 END TEST nvme_e2edp 00:21:36.802 ************************************ 00:21:36.802 00:21:36.802 real 0m0.360s 00:21:36.802 user 0m0.140s 00:21:36.802 sys 0m0.171s 00:21:36.802 14:40:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:36.802 14:40:45 -- common/autotest_common.sh@10 -- # set +x 00:21:36.802 14:40:45 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:21:36.802 14:40:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:36.802 14:40:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:36.802 14:40:45 -- common/autotest_common.sh@10 -- # set +x 00:21:37.060 ************************************ 00:21:37.060 START TEST nvme_reserve 00:21:37.060 ************************************ 00:21:37.060 14:40:45 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:21:37.320 ===================================================== 00:21:37.320 NVMe Controller at PCI bus 0, device 16, function 0 00:21:37.320 ===================================================== 00:21:37.320 Reservations: Not Supported 00:21:37.320 ===================================================== 00:21:37.320 NVMe Controller at PCI bus 0, device 17, function 0 00:21:37.320 ===================================================== 00:21:37.320 Reservations: Not Supported 00:21:37.320 ===================================================== 00:21:37.320 NVMe Controller at PCI bus 0, device 19, function 0 00:21:37.320 ===================================================== 00:21:37.320 Reservations: Not Supported 00:21:37.320 ===================================================== 00:21:37.320 NVMe Controller at PCI bus 0, device 18, function 0 00:21:37.320 ===================================================== 00:21:37.320 Reservations: Not Supported 00:21:37.320 Reservation test passed 00:21:37.320 ************************************ 00:21:37.320 END TEST nvme_reserve 00:21:37.320 ************************************ 00:21:37.320 00:21:37.320 real 0m0.349s 00:21:37.320 user 0m0.122s 00:21:37.320 sys 0m0.186s 00:21:37.320 14:40:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:37.320 14:40:45 -- common/autotest_common.sh@10 -- # set +x 00:21:37.320 14:40:45 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:21:37.320 14:40:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:37.320 14:40:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:37.320 14:40:45 -- common/autotest_common.sh@10 -- # set +x 00:21:37.577 ************************************ 00:21:37.577 START TEST nvme_err_injection 00:21:37.577 ************************************ 00:21:37.577 14:40:45 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:21:37.835 NVMe Error Injection test 00:21:37.835 Attached to 0000:00:10.0 00:21:37.835 Attached to 0000:00:11.0 00:21:37.835 Attached to 0000:00:13.0 00:21:37.835 Attached to 0000:00:12.0 00:21:37.835 0000:00:11.0: get features failed as expected 00:21:37.835 0000:00:13.0: get features failed as expected 00:21:37.835 0000:00:12.0: get features failed as expected 00:21:37.835 0000:00:10.0: get features failed as expected 00:21:37.835 0000:00:12.0: get features successfully as expected 00:21:37.835 0000:00:10.0: get features successfully as expected 00:21:37.835 0000:00:11.0: get 
features successfully as expected 00:21:37.835 0000:00:13.0: get features successfully as expected 00:21:37.835 0000:00:10.0: read failed as expected 00:21:37.835 0000:00:11.0: read failed as expected 00:21:37.835 0000:00:13.0: read failed as expected 00:21:37.835 0000:00:12.0: read failed as expected 00:21:37.835 0000:00:10.0: read successfully as expected 00:21:37.835 0000:00:11.0: read successfully as expected 00:21:37.835 0000:00:13.0: read successfully as expected 00:21:37.835 0000:00:12.0: read successfully as expected 00:21:37.835 Cleaning up... 00:21:37.835 ************************************ 00:21:37.835 END TEST nvme_err_injection 00:21:37.835 ************************************ 00:21:37.835 00:21:37.835 real 0m0.377s 00:21:37.835 user 0m0.149s 00:21:37.835 sys 0m0.190s 00:21:37.835 14:40:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:37.835 14:40:46 -- common/autotest_common.sh@10 -- # set +x 00:21:37.835 14:40:46 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:21:37.835 14:40:46 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:21:37.835 14:40:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:37.835 14:40:46 -- common/autotest_common.sh@10 -- # set +x 00:21:38.093 ************************************ 00:21:38.093 START TEST nvme_overhead 00:21:38.093 ************************************ 00:21:38.093 14:40:46 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:21:39.471 Initializing NVMe Controllers 00:21:39.471 Attached to 0000:00:10.0 00:21:39.471 Attached to 0000:00:11.0 00:21:39.471 Attached to 0000:00:13.0 00:21:39.471 Attached to 0000:00:12.0 00:21:39.472 Initialization complete. Launching workers. 
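[Annotation] The overhead tool launched above (invoked with -o 4096 -t 1 -H -i 0, where -H presumably enables the histogram output) prints avg/min/max latencies followed by cumulative-percentage bucket tables ("Range in us Cumulative Count"). A rough way to pull the first bucket at or past a target percentile out of such output -- the field layout is inferred from this log, the timestamp prefixes are handled by regex matching, and this post-processing is not something the tool itself provides:

# Requires gawk (match() with a capture array).
awk -v target=99.0 '
  /Range in us Cumulative Count/ { in_hist = 1; next }
  in_hist && match($0, /([0-9.]+) - ([0-9.]+): +([0-9.]+)%/, m) {
    if (m[3] + 0 >= target) {
      printf "first bucket reaching %.1f%%: %s us\n", target, m[2]
      exit               # stops at the submit histogram; drop exit to scan both
    }
  }
' overhead.log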
00:21:39.472 submit (in ns) avg, min, max = 15909.2, 13068.6, 87095.2 00:21:39.472 complete (in ns) avg, min, max = 11165.2, 8616.2, 1171918.1 00:21:39.472 00:21:39.472 Submit histogram 00:21:39.472 ================ 00:21:39.472 Range in us Cumulative Count 00:21:39.472 13.044 - 13.105: 0.0104% ( 1) 00:21:39.472 13.105 - 13.166: 0.0311% ( 2) 00:21:39.472 13.166 - 13.227: 0.2075% ( 17) 00:21:39.472 13.227 - 13.288: 0.5705% ( 35) 00:21:39.472 13.288 - 13.349: 1.2241% ( 63) 00:21:39.472 13.349 - 13.410: 2.2303% ( 97) 00:21:39.472 13.410 - 13.470: 2.8423% ( 59) 00:21:39.472 13.470 - 13.531: 3.4544% ( 59) 00:21:39.472 13.531 - 13.592: 3.8589% ( 39) 00:21:39.472 13.592 - 13.653: 4.1805% ( 31) 00:21:39.472 13.653 - 13.714: 4.5124% ( 32) 00:21:39.472 13.714 - 13.775: 4.7822% ( 26) 00:21:39.472 13.775 - 13.836: 5.0934% ( 30) 00:21:39.472 13.836 - 13.897: 5.5083% ( 40) 00:21:39.472 13.897 - 13.958: 6.4108% ( 87) 00:21:39.472 13.958 - 14.019: 8.0187% ( 155) 00:21:39.472 14.019 - 14.080: 10.5913% ( 248) 00:21:39.472 14.080 - 14.141: 13.3195% ( 263) 00:21:39.472 14.141 - 14.202: 15.9544% ( 254) 00:21:39.472 14.202 - 14.263: 18.3299% ( 229) 00:21:39.472 14.263 - 14.324: 20.8506% ( 243) 00:21:39.472 14.324 - 14.385: 23.3506% ( 241) 00:21:39.472 14.385 - 14.446: 26.2863% ( 283) 00:21:39.472 14.446 - 14.507: 28.9730% ( 259) 00:21:39.472 14.507 - 14.568: 31.4212% ( 236) 00:21:39.472 14.568 - 14.629: 33.6307% ( 213) 00:21:39.472 14.629 - 14.690: 35.5187% ( 182) 00:21:39.472 14.690 - 14.750: 37.2095% ( 163) 00:21:39.472 14.750 - 14.811: 38.6929% ( 143) 00:21:39.472 14.811 - 14.872: 40.2386% ( 149) 00:21:39.472 14.872 - 14.933: 41.8983% ( 160) 00:21:39.472 14.933 - 14.994: 43.6515% ( 169) 00:21:39.472 14.994 - 15.055: 44.9066% ( 121) 00:21:39.472 15.055 - 15.116: 45.8921% ( 95) 00:21:39.472 15.116 - 15.177: 46.7946% ( 87) 00:21:39.472 15.177 - 15.238: 47.6867% ( 86) 00:21:39.472 15.238 - 15.299: 48.1224% ( 42) 00:21:39.472 15.299 - 15.360: 48.5581% ( 42) 00:21:39.472 15.360 - 15.421: 48.9523% ( 38) 00:21:39.472 15.421 - 15.482: 49.2427% ( 28) 00:21:39.472 15.482 - 15.543: 49.4502% ( 20) 00:21:39.472 15.543 - 15.604: 49.6266% ( 17) 00:21:39.472 15.604 - 15.726: 49.7614% ( 13) 00:21:39.472 15.726 - 15.848: 49.8444% ( 8) 00:21:39.472 15.848 - 15.970: 49.8963% ( 5) 00:21:39.472 15.970 - 16.091: 49.9378% ( 4) 00:21:39.472 16.091 - 16.213: 50.3734% ( 42) 00:21:39.472 16.213 - 16.335: 54.2116% ( 370) 00:21:39.472 16.335 - 16.457: 63.6929% ( 914) 00:21:39.472 16.457 - 16.579: 73.2365% ( 920) 00:21:39.472 16.579 - 16.701: 80.7780% ( 727) 00:21:39.472 16.701 - 16.823: 85.9129% ( 495) 00:21:39.472 16.823 - 16.945: 88.9523% ( 293) 00:21:39.472 16.945 - 17.067: 90.7261% ( 171) 00:21:39.472 17.067 - 17.189: 91.7116% ( 95) 00:21:39.472 17.189 - 17.310: 92.3963% ( 66) 00:21:39.472 17.310 - 17.432: 92.9772% ( 56) 00:21:39.472 17.432 - 17.554: 93.4544% ( 46) 00:21:39.472 17.554 - 17.676: 93.8174% ( 35) 00:21:39.472 17.676 - 17.798: 94.0249% ( 20) 00:21:39.472 17.798 - 17.920: 94.2427% ( 21) 00:21:39.472 17.920 - 18.042: 94.5021% ( 25) 00:21:39.472 18.042 - 18.164: 94.7095% ( 20) 00:21:39.472 18.164 - 18.286: 94.8548% ( 14) 00:21:39.472 18.286 - 18.408: 94.9481% ( 9) 00:21:39.472 18.408 - 18.530: 95.0104% ( 6) 00:21:39.472 18.530 - 18.651: 95.1037% ( 9) 00:21:39.472 18.651 - 18.773: 95.1867% ( 8) 00:21:39.472 18.773 - 18.895: 95.2178% ( 3) 00:21:39.472 18.895 - 19.017: 95.2801% ( 6) 00:21:39.472 19.017 - 19.139: 95.3112% ( 3) 00:21:39.472 19.139 - 19.261: 95.3838% ( 7) 00:21:39.472 19.261 - 19.383: 95.4149% ( 3) 00:21:39.472 
19.383 - 19.505: 95.4772% ( 6) 00:21:39.472 19.505 - 19.627: 95.5290% ( 5) 00:21:39.472 19.627 - 19.749: 95.5705% ( 4) 00:21:39.472 19.749 - 19.870: 95.5913% ( 2) 00:21:39.472 19.870 - 19.992: 95.6535% ( 6) 00:21:39.472 19.992 - 20.114: 95.7054% ( 5) 00:21:39.472 20.114 - 20.236: 95.7573% ( 5) 00:21:39.472 20.236 - 20.358: 95.7988% ( 4) 00:21:39.472 20.358 - 20.480: 95.9025% ( 10) 00:21:39.472 20.480 - 20.602: 96.0685% ( 16) 00:21:39.472 20.602 - 20.724: 96.2033% ( 13) 00:21:39.472 20.724 - 20.846: 96.3382% ( 13) 00:21:39.472 20.846 - 20.968: 96.4627% ( 12) 00:21:39.472 20.968 - 21.090: 96.5456% ( 8) 00:21:39.472 21.090 - 21.211: 96.6598% ( 11) 00:21:39.472 21.211 - 21.333: 96.7531% ( 9) 00:21:39.472 21.333 - 21.455: 96.8983% ( 14) 00:21:39.472 21.455 - 21.577: 96.9917% ( 9) 00:21:39.472 21.577 - 21.699: 97.0539% ( 6) 00:21:39.472 21.699 - 21.821: 97.1369% ( 8) 00:21:39.472 21.821 - 21.943: 97.2718% ( 13) 00:21:39.472 21.943 - 22.065: 97.3029% ( 3) 00:21:39.472 22.065 - 22.187: 97.3444% ( 4) 00:21:39.472 22.187 - 22.309: 97.4274% ( 8) 00:21:39.472 22.309 - 22.430: 97.4793% ( 5) 00:21:39.472 22.430 - 22.552: 97.5000% ( 2) 00:21:39.472 22.552 - 22.674: 97.5415% ( 4) 00:21:39.472 22.674 - 22.796: 97.5519% ( 1) 00:21:39.472 22.796 - 22.918: 97.5622% ( 1) 00:21:39.472 22.918 - 23.040: 97.5726% ( 1) 00:21:39.472 23.040 - 23.162: 97.6141% ( 4) 00:21:39.472 23.162 - 23.284: 97.6245% ( 1) 00:21:39.472 23.284 - 23.406: 97.6556% ( 3) 00:21:39.472 23.406 - 23.528: 97.6660% ( 1) 00:21:39.472 23.528 - 23.650: 97.6763% ( 1) 00:21:39.472 23.650 - 23.771: 97.6867% ( 1) 00:21:39.472 23.771 - 23.893: 97.7282% ( 4) 00:21:39.472 23.893 - 24.015: 97.7905% ( 6) 00:21:39.472 24.015 - 24.137: 97.8631% ( 7) 00:21:39.472 24.137 - 24.259: 97.9461% ( 8) 00:21:39.472 24.259 - 24.381: 97.9979% ( 5) 00:21:39.472 24.381 - 24.503: 98.0498% ( 5) 00:21:39.472 24.503 - 24.625: 98.1432% ( 9) 00:21:39.472 24.625 - 24.747: 98.2573% ( 11) 00:21:39.472 24.747 - 24.869: 98.2988% ( 4) 00:21:39.472 24.869 - 24.990: 98.3921% ( 9) 00:21:39.472 24.990 - 25.112: 98.4232% ( 3) 00:21:39.472 25.112 - 25.234: 98.5062% ( 8) 00:21:39.472 25.234 - 25.356: 98.6307% ( 12) 00:21:39.472 25.356 - 25.478: 98.7137% ( 8) 00:21:39.472 25.478 - 25.600: 98.7656% ( 5) 00:21:39.472 25.600 - 25.722: 98.8174% ( 5) 00:21:39.472 25.722 - 25.844: 98.9108% ( 9) 00:21:39.472 25.844 - 25.966: 98.9523% ( 4) 00:21:39.472 25.966 - 26.088: 98.9938% ( 4) 00:21:39.472 26.088 - 26.210: 99.0560% ( 6) 00:21:39.472 26.210 - 26.331: 99.0975% ( 4) 00:21:39.472 26.331 - 26.453: 99.1183% ( 2) 00:21:39.472 26.575 - 26.697: 99.1701% ( 5) 00:21:39.472 26.941 - 27.063: 99.1805% ( 1) 00:21:39.472 27.063 - 27.185: 99.1909% ( 1) 00:21:39.472 27.185 - 27.307: 99.2220% ( 3) 00:21:39.472 27.307 - 27.429: 99.2531% ( 3) 00:21:39.472 27.429 - 27.550: 99.2842% ( 3) 00:21:39.472 27.550 - 27.672: 99.3465% ( 6) 00:21:39.472 27.672 - 27.794: 99.3776% ( 3) 00:21:39.472 27.794 - 27.916: 99.4191% ( 4) 00:21:39.472 28.038 - 28.160: 99.4295% ( 1) 00:21:39.472 28.282 - 28.404: 99.4398% ( 1) 00:21:39.472 28.404 - 28.526: 99.4710% ( 3) 00:21:39.472 28.526 - 28.648: 99.4813% ( 1) 00:21:39.472 28.648 - 28.770: 99.5021% ( 2) 00:21:39.472 28.770 - 28.891: 99.5124% ( 1) 00:21:39.472 29.135 - 29.257: 99.5228% ( 1) 00:21:39.472 29.867 - 29.989: 99.5332% ( 1) 00:21:39.472 30.110 - 30.232: 99.5436% ( 1) 00:21:39.472 30.232 - 30.354: 99.5643% ( 2) 00:21:39.472 30.354 - 30.476: 99.5851% ( 2) 00:21:39.472 30.598 - 30.720: 99.5954% ( 1) 00:21:39.472 30.720 - 30.842: 99.6266% ( 3) 00:21:39.472 30.842 - 30.964: 
99.6577% ( 3) 00:21:39.472 30.964 - 31.086: 99.6784% ( 2) 00:21:39.472 31.208 - 31.451: 99.7199% ( 4) 00:21:39.472 31.451 - 31.695: 99.7407% ( 2) 00:21:39.472 32.183 - 32.427: 99.7822% ( 4) 00:21:39.472 32.427 - 32.670: 99.8237% ( 4) 00:21:39.472 33.158 - 33.402: 99.8548% ( 3) 00:21:39.472 33.646 - 33.890: 99.8755% ( 2) 00:21:39.472 34.377 - 34.621: 99.8859% ( 1) 00:21:39.472 35.596 - 35.840: 99.8963% ( 1) 00:21:39.472 37.303 - 37.547: 99.9066% ( 1) 00:21:39.472 37.790 - 38.034: 99.9170% ( 1) 00:21:39.472 38.278 - 38.522: 99.9274% ( 1) 00:21:39.472 41.935 - 42.179: 99.9378% ( 1) 00:21:39.472 43.642 - 43.886: 99.9481% ( 1) 00:21:39.472 50.469 - 50.712: 99.9585% ( 1) 00:21:39.472 50.956 - 51.200: 99.9689% ( 1) 00:21:39.472 51.444 - 51.688: 99.9793% ( 1) 00:21:39.472 79.970 - 80.457: 99.9896% ( 1) 00:21:39.472 86.796 - 87.284: 100.0000% ( 1) 00:21:39.472 00:21:39.472 Complete histogram 00:21:39.472 ================== 00:21:39.472 Range in us Cumulative Count 00:21:39.472 8.594 - 8.655: 0.0519% ( 5) 00:21:39.472 8.655 - 8.716: 0.3527% ( 29) 00:21:39.472 8.716 - 8.777: 0.5394% ( 18) 00:21:39.472 8.777 - 8.838: 0.6432% ( 10) 00:21:39.472 8.838 - 8.899: 0.7054% ( 6) 00:21:39.472 8.899 - 8.960: 0.8299% ( 12) 00:21:39.472 8.960 - 9.021: 1.2241% ( 38) 00:21:39.472 9.021 - 9.082: 1.8257% ( 58) 00:21:39.473 9.082 - 9.143: 2.6349% ( 78) 00:21:39.473 9.143 - 9.204: 4.3361% ( 164) 00:21:39.473 9.204 - 9.265: 8.5373% ( 405) 00:21:39.473 9.265 - 9.326: 11.7635% ( 311) 00:21:39.473 9.326 - 9.387: 13.7448% ( 191) 00:21:39.473 9.387 - 9.448: 14.8029% ( 102) 00:21:39.473 9.448 - 9.509: 16.0062% ( 116) 00:21:39.473 9.509 - 9.570: 18.9627% ( 285) 00:21:39.473 9.570 - 9.630: 22.4378% ( 335) 00:21:39.473 9.630 - 9.691: 25.2905% ( 275) 00:21:39.473 9.691 - 9.752: 27.6556% ( 228) 00:21:39.473 9.752 - 9.813: 29.6162% ( 189) 00:21:39.473 9.813 - 9.874: 31.9710% ( 227) 00:21:39.473 9.874 - 9.935: 34.7822% ( 271) 00:21:39.473 9.935 - 9.996: 37.1577% ( 229) 00:21:39.473 9.996 - 10.057: 39.0560% ( 183) 00:21:39.473 10.057 - 10.118: 40.5602% ( 145) 00:21:39.473 10.118 - 10.179: 41.6701% ( 107) 00:21:39.473 10.179 - 10.240: 42.5726% ( 87) 00:21:39.473 10.240 - 10.301: 43.6722% ( 106) 00:21:39.473 10.301 - 10.362: 44.9066% ( 119) 00:21:39.473 10.362 - 10.423: 46.0788% ( 113) 00:21:39.473 10.423 - 10.484: 47.1992% ( 108) 00:21:39.473 10.484 - 10.545: 48.1224% ( 89) 00:21:39.473 10.545 - 10.606: 49.0560% ( 90) 00:21:39.473 10.606 - 10.667: 49.8755% ( 79) 00:21:39.473 10.667 - 10.728: 50.3734% ( 48) 00:21:39.473 10.728 - 10.789: 50.8402% ( 45) 00:21:39.473 10.789 - 10.850: 51.1411% ( 29) 00:21:39.473 10.850 - 10.910: 51.3174% ( 17) 00:21:39.473 10.910 - 10.971: 51.4108% ( 9) 00:21:39.473 10.971 - 11.032: 51.5456% ( 13) 00:21:39.473 11.032 - 11.093: 52.0332% ( 47) 00:21:39.473 11.093 - 11.154: 52.7386% ( 68) 00:21:39.473 11.154 - 11.215: 53.3299% ( 57) 00:21:39.473 11.215 - 11.276: 53.9108% ( 56) 00:21:39.473 11.276 - 11.337: 54.6473% ( 71) 00:21:39.473 11.337 - 11.398: 56.7324% ( 201) 00:21:39.473 11.398 - 11.459: 60.9751% ( 409) 00:21:39.473 11.459 - 11.520: 66.7842% ( 560) 00:21:39.473 11.520 - 11.581: 73.6100% ( 658) 00:21:39.473 11.581 - 11.642: 79.1598% ( 535) 00:21:39.473 11.642 - 11.703: 84.0145% ( 468) 00:21:39.473 11.703 - 11.764: 87.2822% ( 315) 00:21:39.473 11.764 - 11.825: 89.6992% ( 233) 00:21:39.473 11.825 - 11.886: 91.6494% ( 188) 00:21:39.473 11.886 - 11.947: 92.8527% ( 116) 00:21:39.473 11.947 - 12.008: 93.8797% ( 99) 00:21:39.473 12.008 - 12.069: 94.6266% ( 72) 00:21:39.473 12.069 - 12.130: 95.1141% ( 47) 
00:21:39.473 12.130 - 12.190: 95.4979% ( 37) 00:21:39.473 12.190 - 12.251: 95.7054% ( 20) 00:21:39.473 12.251 - 12.312: 95.8506% ( 14) 00:21:39.473 12.312 - 12.373: 95.9544% ( 10) 00:21:39.473 12.373 - 12.434: 96.0581% ( 10) 00:21:39.473 12.434 - 12.495: 96.1929% ( 13) 00:21:39.473 12.495 - 12.556: 96.2552% ( 6) 00:21:39.473 12.556 - 12.617: 96.3071% ( 5) 00:21:39.473 12.617 - 12.678: 96.3485% ( 4) 00:21:39.473 12.678 - 12.739: 96.4004% ( 5) 00:21:39.473 12.739 - 12.800: 96.4627% ( 6) 00:21:39.473 12.800 - 12.861: 96.5145% ( 5) 00:21:39.473 12.922 - 12.983: 96.5353% ( 2) 00:21:39.473 12.983 - 13.044: 96.5560% ( 2) 00:21:39.473 13.105 - 13.166: 96.5768% ( 2) 00:21:39.473 13.166 - 13.227: 96.5871% ( 1) 00:21:39.473 13.227 - 13.288: 96.6079% ( 2) 00:21:39.473 13.470 - 13.531: 96.6286% ( 2) 00:21:39.473 13.531 - 13.592: 96.6494% ( 2) 00:21:39.473 13.592 - 13.653: 96.6701% ( 2) 00:21:39.473 13.714 - 13.775: 96.6805% ( 1) 00:21:39.473 14.019 - 14.080: 96.6909% ( 1) 00:21:39.473 14.080 - 14.141: 96.7012% ( 1) 00:21:39.473 14.324 - 14.385: 96.7116% ( 1) 00:21:39.473 14.385 - 14.446: 96.7324% ( 2) 00:21:39.473 14.446 - 14.507: 96.7635% ( 3) 00:21:39.473 14.568 - 14.629: 96.7842% ( 2) 00:21:39.473 14.629 - 14.690: 96.7946% ( 1) 00:21:39.473 14.750 - 14.811: 96.8050% ( 1) 00:21:39.473 14.811 - 14.872: 96.8154% ( 1) 00:21:39.473 14.872 - 14.933: 96.8361% ( 2) 00:21:39.473 14.994 - 15.055: 96.8465% ( 1) 00:21:39.473 15.055 - 15.116: 96.8568% ( 1) 00:21:39.473 15.116 - 15.177: 96.8880% ( 3) 00:21:39.473 15.177 - 15.238: 96.8983% ( 1) 00:21:39.473 15.299 - 15.360: 96.9191% ( 2) 00:21:39.473 15.360 - 15.421: 96.9295% ( 1) 00:21:39.473 15.421 - 15.482: 96.9502% ( 2) 00:21:39.473 15.543 - 15.604: 96.9813% ( 3) 00:21:39.473 15.604 - 15.726: 97.0124% ( 3) 00:21:39.473 15.726 - 15.848: 97.0436% ( 3) 00:21:39.473 15.848 - 15.970: 97.0851% ( 4) 00:21:39.473 15.970 - 16.091: 97.1680% ( 8) 00:21:39.473 16.091 - 16.213: 97.2199% ( 5) 00:21:39.473 16.213 - 16.335: 97.2925% ( 7) 00:21:39.473 16.335 - 16.457: 97.3340% ( 4) 00:21:39.473 16.457 - 16.579: 97.3755% ( 4) 00:21:39.473 16.579 - 16.701: 97.4378% ( 6) 00:21:39.473 16.701 - 16.823: 97.4896% ( 5) 00:21:39.473 16.823 - 16.945: 97.5622% ( 7) 00:21:39.473 16.945 - 17.067: 97.6141% ( 5) 00:21:39.473 17.067 - 17.189: 97.6971% ( 8) 00:21:39.473 17.189 - 17.310: 97.7178% ( 2) 00:21:39.473 17.310 - 17.432: 97.7386% ( 2) 00:21:39.473 17.432 - 17.554: 97.8008% ( 6) 00:21:39.473 17.554 - 17.676: 97.8216% ( 2) 00:21:39.473 17.676 - 17.798: 97.8320% ( 1) 00:21:39.473 17.798 - 17.920: 97.8527% ( 2) 00:21:39.473 17.920 - 18.042: 97.8734% ( 2) 00:21:39.473 18.042 - 18.164: 97.8838% ( 1) 00:21:39.473 18.164 - 18.286: 97.9046% ( 2) 00:21:39.473 18.286 - 18.408: 97.9149% ( 1) 00:21:39.473 18.530 - 18.651: 97.9461% ( 3) 00:21:39.473 18.651 - 18.773: 97.9564% ( 1) 00:21:39.473 18.773 - 18.895: 97.9876% ( 3) 00:21:39.473 18.895 - 19.017: 98.0187% ( 3) 00:21:39.473 19.017 - 19.139: 98.0705% ( 5) 00:21:39.473 19.139 - 19.261: 98.1432% ( 7) 00:21:39.473 19.261 - 19.383: 98.1950% ( 5) 00:21:39.473 19.383 - 19.505: 98.2573% ( 6) 00:21:39.473 19.505 - 19.627: 98.2884% ( 3) 00:21:39.473 19.627 - 19.749: 98.3195% ( 3) 00:21:39.473 19.749 - 19.870: 98.3714% ( 5) 00:21:39.473 19.870 - 19.992: 98.4440% ( 7) 00:21:39.473 19.992 - 20.114: 98.5166% ( 7) 00:21:39.473 20.114 - 20.236: 98.5581% ( 4) 00:21:39.473 20.236 - 20.358: 98.6100% ( 5) 00:21:39.473 20.358 - 20.480: 98.7137% ( 10) 00:21:39.473 20.480 - 20.602: 98.7863% ( 7) 00:21:39.473 20.602 - 20.724: 98.8589% ( 7) 00:21:39.473 20.724 - 
20.846: 98.8797% ( 2) 00:21:39.473 20.846 - 20.968: 98.9212% ( 4) 00:21:39.473 20.968 - 21.090: 98.9834% ( 6) 00:21:39.473 21.090 - 21.211: 99.0353% ( 5) 00:21:39.473 21.211 - 21.333: 99.0768% ( 4) 00:21:39.473 21.333 - 21.455: 99.1079% ( 3) 00:21:39.473 21.455 - 21.577: 99.1390% ( 3) 00:21:39.473 21.577 - 21.699: 99.1494% ( 1) 00:21:39.473 21.699 - 21.821: 99.1805% ( 3) 00:21:39.473 21.821 - 21.943: 99.2116% ( 3) 00:21:39.473 21.943 - 22.065: 99.2324% ( 2) 00:21:39.473 22.065 - 22.187: 99.2427% ( 1) 00:21:39.473 22.187 - 22.309: 99.2842% ( 4) 00:21:39.473 22.309 - 22.430: 99.3050% ( 2) 00:21:39.473 22.430 - 22.552: 99.3257% ( 2) 00:21:39.473 22.552 - 22.674: 99.3361% ( 1) 00:21:39.473 22.918 - 23.040: 99.3568% ( 2) 00:21:39.473 23.406 - 23.528: 99.3672% ( 1) 00:21:39.473 23.893 - 24.015: 99.3880% ( 2) 00:21:39.473 24.381 - 24.503: 99.3983% ( 1) 00:21:39.473 24.625 - 24.747: 99.4295% ( 3) 00:21:39.473 25.478 - 25.600: 99.4398% ( 1) 00:21:39.473 25.600 - 25.722: 99.4502% ( 1) 00:21:39.473 25.844 - 25.966: 99.4813% ( 3) 00:21:39.473 25.966 - 26.088: 99.5021% ( 2) 00:21:39.473 26.088 - 26.210: 99.5228% ( 2) 00:21:39.473 26.210 - 26.331: 99.5436% ( 2) 00:21:39.473 26.331 - 26.453: 99.5747% ( 3) 00:21:39.473 26.453 - 26.575: 99.5851% ( 1) 00:21:39.473 26.575 - 26.697: 99.6058% ( 2) 00:21:39.473 26.697 - 26.819: 99.6162% ( 1) 00:21:39.473 26.819 - 26.941: 99.6473% ( 3) 00:21:39.473 27.185 - 27.307: 99.6680% ( 2) 00:21:39.473 27.307 - 27.429: 99.6784% ( 1) 00:21:39.473 27.429 - 27.550: 99.7199% ( 4) 00:21:39.473 27.550 - 27.672: 99.7303% ( 1) 00:21:39.473 27.672 - 27.794: 99.7407% ( 1) 00:21:39.473 28.038 - 28.160: 99.7822% ( 4) 00:21:39.473 28.160 - 28.282: 99.8133% ( 3) 00:21:39.473 28.648 - 28.770: 99.8237% ( 1) 00:21:39.473 30.476 - 30.598: 99.8340% ( 1) 00:21:39.473 30.842 - 30.964: 99.8444% ( 1) 00:21:39.473 31.451 - 31.695: 99.8548% ( 1) 00:21:39.473 31.695 - 31.939: 99.8755% ( 2) 00:21:39.473 32.670 - 32.914: 99.8859% ( 1) 00:21:39.473 38.278 - 38.522: 99.8963% ( 1) 00:21:39.473 40.229 - 40.472: 99.9066% ( 1) 00:21:39.473 40.716 - 40.960: 99.9170% ( 1) 00:21:39.473 46.080 - 46.324: 99.9274% ( 1) 00:21:39.473 48.518 - 48.762: 99.9378% ( 1) 00:21:39.473 59.977 - 60.221: 99.9481% ( 1) 00:21:39.473 83.870 - 84.358: 99.9585% ( 1) 00:21:39.473 85.821 - 86.309: 99.9689% ( 1) 00:21:39.473 218.453 - 219.429: 99.9793% ( 1) 00:21:39.473 585.143 - 589.044: 99.9896% ( 1) 00:21:39.473 1170.286 - 1178.088: 100.0000% ( 1) 00:21:39.473 00:21:39.473 ************************************ 00:21:39.473 END TEST nvme_overhead 00:21:39.473 ************************************ 00:21:39.473 00:21:39.473 real 0m1.370s 00:21:39.474 user 0m1.107s 00:21:39.474 sys 0m0.203s 00:21:39.474 14:40:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:39.474 14:40:47 -- common/autotest_common.sh@10 -- # set +x 00:21:39.474 14:40:47 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:21:39.474 14:40:47 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:21:39.474 14:40:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:39.474 14:40:47 -- common/autotest_common.sh@10 -- # set +x 00:21:39.474 ************************************ 00:21:39.474 START TEST nvme_arbitration 00:21:39.474 ************************************ 00:21:39.474 14:40:47 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:21:43.689 Initializing NVMe Controllers 00:21:43.689 Attached to 0000:00:10.0 00:21:43.689 
Attached to 0000:00:11.0 00:21:43.689 Attached to 0000:00:13.0 00:21:43.689 Attached to 0000:00:12.0 00:21:43.689 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:21:43.689 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:21:43.689 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:21:43.689 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:21:43.689 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:21:43.689 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:21:43.689 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:21:43.689 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:21:43.689 Initialization complete. Launching workers. 00:21:43.689 Starting thread on core 1 with urgent priority queue 00:21:43.689 Starting thread on core 2 with urgent priority queue 00:21:43.689 Starting thread on core 3 with urgent priority queue 00:21:43.689 Starting thread on core 0 with urgent priority queue 00:21:43.689 QEMU NVMe Ctrl (12340 ) core 0: 469.33 IO/s 213.07 secs/100000 ios 00:21:43.689 QEMU NVMe Ctrl (12342 ) core 0: 469.33 IO/s 213.07 secs/100000 ios 00:21:43.689 QEMU NVMe Ctrl (12341 ) core 1: 490.67 IO/s 203.80 secs/100000 ios 00:21:43.689 QEMU NVMe Ctrl (12342 ) core 1: 490.67 IO/s 203.80 secs/100000 ios 00:21:43.689 QEMU NVMe Ctrl (12343 ) core 2: 469.33 IO/s 213.07 secs/100000 ios 00:21:43.689 QEMU NVMe Ctrl (12342 ) core 3: 448.00 IO/s 223.21 secs/100000 ios 00:21:43.689 ======================================================== 00:21:43.689 00:21:43.689 ************************************ 00:21:43.689 END TEST nvme_arbitration 00:21:43.689 ************************************ 00:21:43.689 00:21:43.689 real 0m3.517s 00:21:43.689 user 0m9.458s 00:21:43.689 sys 0m0.206s 00:21:43.689 14:40:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:43.689 14:40:51 -- common/autotest_common.sh@10 -- # set +x 00:21:43.689 14:40:51 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:21:43.689 14:40:51 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:21:43.689 14:40:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:43.689 14:40:51 -- common/autotest_common.sh@10 -- # set +x 00:21:43.689 ************************************ 00:21:43.689 START TEST nvme_single_aen 00:21:43.689 ************************************ 00:21:43.689 14:40:51 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:21:43.689 Asynchronous Event Request test 00:21:43.689 Attached to 0000:00:10.0 00:21:43.689 Attached to 0000:00:11.0 00:21:43.689 Attached to 0000:00:13.0 00:21:43.689 Attached to 0000:00:12.0 00:21:43.689 Reset controller to setup AER completions for this process 00:21:43.689 Registering asynchronous event callbacks... 
00:21:43.689 Getting orig temperature thresholds of all controllers 00:21:43.689 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:21:43.689 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:21:43.689 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:21:43.689 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:21:43.689 Setting all controllers temperature threshold low to trigger AER 00:21:43.689 Waiting for all controllers temperature threshold to be set lower 00:21:43.689 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:21:43.689 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:21:43.689 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:21:43.689 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:21:43.689 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:21:43.689 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:21:43.689 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:21:43.689 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:21:43.689 Waiting for all controllers to trigger AER and reset threshold 00:21:43.689 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:21:43.689 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:21:43.689 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:21:43.689 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:21:43.689 Cleaning up... 00:21:43.689 ************************************ 00:21:43.689 END TEST nvme_single_aen 00:21:43.689 ************************************ 00:21:43.689 00:21:43.689 real 0m0.305s 00:21:43.689 user 0m0.095s 00:21:43.689 sys 0m0.160s 00:21:43.689 14:40:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:43.689 14:40:51 -- common/autotest_common.sh@10 -- # set +x 00:21:43.689 14:40:51 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:21:43.689 14:40:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:43.689 14:40:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:43.689 14:40:51 -- common/autotest_common.sh@10 -- # set +x 00:21:43.689 ************************************ 00:21:43.689 START TEST nvme_doorbell_aers 00:21:43.689 ************************************ 00:21:43.689 14:40:52 -- common/autotest_common.sh@1111 -- # nvme_doorbell_aers 00:21:43.689 14:40:52 -- nvme/nvme.sh@70 -- # bdfs=() 00:21:43.689 14:40:52 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:21:43.689 14:40:52 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:21:43.689 14:40:52 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:21:43.689 14:40:52 -- common/autotest_common.sh@1499 -- # bdfs=() 00:21:43.689 14:40:52 -- common/autotest_common.sh@1499 -- # local bdfs 00:21:43.689 14:40:52 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:43.689 14:40:52 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:43.689 14:40:52 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:21:43.689 14:40:52 -- common/autotest_common.sh@1501 -- # (( 4 == 0 )) 00:21:43.689 14:40:52 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:21:43.689 14:40:52 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:21:43.689 14:40:52 -- nvme/nvme.sh@73 
-- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:21:43.948 [2024-04-17 14:40:52.441178] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70355) is not found. Dropping the request. 00:21:53.935 Executing: test_write_invalid_db 00:21:53.935 Waiting for AER completion... 00:21:53.935 Failure: test_write_invalid_db 00:21:53.935 00:21:53.935 Executing: test_invalid_db_write_overflow_sq 00:21:53.935 Waiting for AER completion... 00:21:53.935 Failure: test_invalid_db_write_overflow_sq 00:21:53.935 00:21:53.935 Executing: test_invalid_db_write_overflow_cq 00:21:53.935 Waiting for AER completion... 00:21:53.935 Failure: test_invalid_db_write_overflow_cq 00:21:53.935 00:21:53.935 14:41:02 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:21:53.935 14:41:02 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:21:53.935 [2024-04-17 14:41:02.434156] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70355) is not found. Dropping the request. 00:22:03.905 Executing: test_write_invalid_db 00:22:03.905 Waiting for AER completion... 00:22:03.905 Failure: test_write_invalid_db 00:22:03.905 00:22:03.905 Executing: test_invalid_db_write_overflow_sq 00:22:03.905 Waiting for AER completion... 00:22:03.905 Failure: test_invalid_db_write_overflow_sq 00:22:03.905 00:22:03.905 Executing: test_invalid_db_write_overflow_cq 00:22:03.905 Waiting for AER completion... 00:22:03.905 Failure: test_invalid_db_write_overflow_cq 00:22:03.905 00:22:03.905 14:41:12 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:22:03.905 14:41:12 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:22:04.162 [2024-04-17 14:41:12.567293] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70355) is not found. Dropping the request. 00:22:14.165 Executing: test_write_invalid_db 00:22:14.165 Waiting for AER completion... 00:22:14.165 Failure: test_write_invalid_db 00:22:14.165 00:22:14.165 Executing: test_invalid_db_write_overflow_sq 00:22:14.165 Waiting for AER completion... 00:22:14.165 Failure: test_invalid_db_write_overflow_sq 00:22:14.165 00:22:14.165 Executing: test_invalid_db_write_overflow_cq 00:22:14.165 Waiting for AER completion... 00:22:14.165 Failure: test_invalid_db_write_overflow_cq 00:22:14.165 00:22:14.165 14:41:22 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:22:14.165 14:41:22 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:22:14.165 [2024-04-17 14:41:22.620404] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70355) is not found. Dropping the request. 00:22:24.205 Executing: test_write_invalid_db 00:22:24.205 Waiting for AER completion... 00:22:24.205 Failure: test_write_invalid_db 00:22:24.205 00:22:24.205 Executing: test_invalid_db_write_overflow_sq 00:22:24.205 Waiting for AER completion... 00:22:24.205 Failure: test_invalid_db_write_overflow_sq 00:22:24.205 00:22:24.205 Executing: test_invalid_db_write_overflow_cq 00:22:24.205 Waiting for AER completion... 
00:22:24.205 Failure: test_invalid_db_write_overflow_cq 00:22:24.205 00:22:24.205 ************************************ 00:22:24.205 END TEST nvme_doorbell_aers 00:22:24.205 ************************************ 00:22:24.205 00:22:24.205 real 0m40.342s 00:22:24.205 user 0m29.074s 00:22:24.205 sys 0m10.866s 00:22:24.205 14:41:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:24.205 14:41:32 -- common/autotest_common.sh@10 -- # set +x 00:22:24.205 14:41:32 -- nvme/nvme.sh@97 -- # uname 00:22:24.205 14:41:32 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:22:24.205 14:41:32 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:22:24.205 14:41:32 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:22:24.205 14:41:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:24.205 14:41:32 -- common/autotest_common.sh@10 -- # set +x 00:22:24.205 ************************************ 00:22:24.205 START TEST nvme_multi_aen 00:22:24.205 ************************************ 00:22:24.205 14:41:32 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:22:24.205 [2024-04-17 14:41:32.737539] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70355) is not found. Dropping the request. 00:22:24.205 [2024-04-17 14:41:32.737843] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70355) is not found. Dropping the request. 00:22:24.205 [2024-04-17 14:41:32.738003] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70355) is not found. Dropping the request. 00:22:24.205 [2024-04-17 14:41:32.739739] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70355) is not found. Dropping the request. 00:22:24.205 [2024-04-17 14:41:32.739919] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70355) is not found. Dropping the request. 00:22:24.205 [2024-04-17 14:41:32.740040] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70355) is not found. Dropping the request. 00:22:24.205 [2024-04-17 14:41:32.741550] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70355) is not found. Dropping the request. 00:22:24.205 [2024-04-17 14:41:32.741735] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70355) is not found. Dropping the request. 00:22:24.205 [2024-04-17 14:41:32.741850] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70355) is not found. Dropping the request. 00:22:24.205 [2024-04-17 14:41:32.743468] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70355) is not found. Dropping the request. 00:22:24.205 [2024-04-17 14:41:32.743678] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70355) is not found. Dropping the request. 00:22:24.205 [2024-04-17 14:41:32.743815] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70355) is not found. Dropping the request. 
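[Annotation] The nvme_doorbell_aers run above shows its own plumbing in the xtrace lines: controller BDFs are enumerated with gen_nvme.sh piped through jq, then the test binary is run once per controller under a 10-second timeout. Condensed into a standalone sketch (paths mirror the log; the verbatim nvme.sh function may differ in detail):

rootdir=/home/vagrant/spdk_repo/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || exit 1               # the log's run found 4 controllers

for bdf in "${bdfs[@]}"; do
  # --preserve-status makes timeout propagate the tool's own exit code
  timeout --preserve-status 10 \
    "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
done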
00:22:24.205 Child process pid: 70886 00:22:24.463 [Child] Asynchronous Event Request test 00:22:24.463 [Child] Attached to 0000:00:10.0 00:22:24.463 [Child] Attached to 0000:00:11.0 00:22:24.463 [Child] Attached to 0000:00:13.0 00:22:24.463 [Child] Attached to 0000:00:12.0 00:22:24.463 [Child] Registering asynchronous event callbacks... 00:22:24.463 [Child] Getting orig temperature thresholds of all controllers 00:22:24.463 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:22:24.463 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:22:24.463 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:22:24.463 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:22:24.463 [Child] Waiting for all controllers to trigger AER and reset threshold 00:22:24.463 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:22:24.463 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:22:24.463 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:22:24.463 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:22:24.463 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:22:24.463 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:22:24.463 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:22:24.463 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:22:24.463 [Child] Cleaning up... 00:22:24.721 Asynchronous Event Request test 00:22:24.721 Attached to 0000:00:10.0 00:22:24.721 Attached to 0000:00:11.0 00:22:24.721 Attached to 0000:00:13.0 00:22:24.721 Attached to 0000:00:12.0 00:22:24.721 Reset controller to setup AER completions for this process 00:22:24.721 Registering asynchronous event callbacks... 
00:22:24.721 Getting orig temperature thresholds of all controllers 00:22:24.721 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:22:24.722 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:22:24.722 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:22:24.722 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:22:24.722 Setting all controllers temperature threshold low to trigger AER 00:22:24.722 Waiting for all controllers temperature threshold to be set lower 00:22:24.722 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:22:24.722 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:22:24.722 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:22:24.722 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:22:24.722 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:22:24.722 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:22:24.722 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:22:24.722 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:22:24.722 Waiting for all controllers to trigger AER and reset threshold 00:22:24.722 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:22:24.722 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:22:24.722 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:22:24.722 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:22:24.722 Cleaning up... 00:22:24.722 ************************************ 00:22:24.722 END TEST nvme_multi_aen 00:22:24.722 ************************************ 00:22:24.722 00:22:24.722 real 0m0.639s 00:22:24.722 user 0m0.217s 00:22:24.722 sys 0m0.314s 00:22:24.722 14:41:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:24.722 14:41:33 -- common/autotest_common.sh@10 -- # set +x 00:22:24.722 14:41:33 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:22:24.722 14:41:33 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:22:24.722 14:41:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:24.722 14:41:33 -- common/autotest_common.sh@10 -- # set +x 00:22:24.722 ************************************ 00:22:24.722 START TEST nvme_startup 00:22:24.722 ************************************ 00:22:24.722 14:41:33 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:22:25.286 Initializing NVMe Controllers 00:22:25.286 Attached to 0000:00:10.0 00:22:25.286 Attached to 0000:00:11.0 00:22:25.286 Attached to 0000:00:13.0 00:22:25.286 Attached to 0000:00:12.0 00:22:25.286 Initialization complete. 00:22:25.286 Time used:240925.094 (us). 
00:22:25.286 ************************************ 00:22:25.286 END TEST nvme_startup 00:22:25.286 ************************************ 00:22:25.286 00:22:25.286 real 0m0.353s 00:22:25.286 user 0m0.126s 00:22:25.286 sys 0m0.164s 00:22:25.286 14:41:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:25.286 14:41:33 -- common/autotest_common.sh@10 -- # set +x 00:22:25.286 14:41:33 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:22:25.286 14:41:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:25.286 14:41:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:25.286 14:41:33 -- common/autotest_common.sh@10 -- # set +x 00:22:25.286 ************************************ 00:22:25.286 START TEST nvme_multi_secondary 00:22:25.286 ************************************ 00:22:25.286 14:41:33 -- common/autotest_common.sh@1111 -- # nvme_multi_secondary 00:22:25.286 14:41:33 -- nvme/nvme.sh@52 -- # pid0=70950 00:22:25.286 14:41:33 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:22:25.286 14:41:33 -- nvme/nvme.sh@54 -- # pid1=70951 00:22:25.286 14:41:33 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:22:25.286 14:41:33 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:22:28.563 Initializing NVMe Controllers 00:22:28.563 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:22:28.563 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:22:28.563 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:22:28.563 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:22:28.563 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:22:28.563 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:22:28.563 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:22:28.563 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:22:28.563 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:22:28.563 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:22:28.563 Initialization complete. Launching workers. 
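[Annotation] In the Latency tables that follow, the MiB/s column is simply IOPS scaled by the 4096-byte IO size from the perf command lines above (-o 4096): MiB/s = IOPS * 4096 / 2^20. Spot-check against the first core-1 row below:

awk 'BEGIN { printf "%.2f MiB/s\n", 4751.23 * 4096 / 1048576 }'   # -> 18.56, matching the table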
00:22:28.563 ======================================================== 00:22:28.563 Latency(us) 00:22:28.563 Device Information : IOPS MiB/s Average min max 00:22:28.563 PCIE (0000:00:10.0) NSID 1 from core 1: 4751.23 18.56 3365.83 1083.82 8969.31 00:22:28.564 PCIE (0000:00:11.0) NSID 1 from core 1: 4751.23 18.56 3367.38 1141.74 9115.19 00:22:28.564 PCIE (0000:00:13.0) NSID 1 from core 1: 4751.23 18.56 3367.42 1113.64 9393.50 00:22:28.564 PCIE (0000:00:12.0) NSID 1 from core 1: 4751.23 18.56 3367.52 1091.04 7626.57 00:22:28.564 PCIE (0000:00:12.0) NSID 2 from core 1: 4751.23 18.56 3367.55 1089.64 8407.79 00:22:28.564 PCIE (0000:00:12.0) NSID 3 from core 1: 4756.56 18.58 3363.83 1110.33 8832.25 00:22:28.564 ======================================================== 00:22:28.564 Total : 28512.70 111.38 3366.59 1083.82 9393.50 00:22:28.564 00:22:29.128 Initializing NVMe Controllers 00:22:29.128 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:22:29.128 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:22:29.128 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:22:29.128 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:22:29.128 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:22:29.128 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:22:29.128 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:22:29.128 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:22:29.128 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:22:29.128 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:22:29.128 Initialization complete. Launching workers. 00:22:29.128 ======================================================== 00:22:29.128 Latency(us) 00:22:29.128 Device Information : IOPS MiB/s Average min max 00:22:29.128 PCIE (0000:00:10.0) NSID 1 from core 2: 2100.59 8.21 7614.79 1668.47 25778.32 00:22:29.128 PCIE (0000:00:11.0) NSID 1 from core 2: 2100.59 8.21 7616.59 1836.23 25428.73 00:22:29.128 PCIE (0000:00:13.0) NSID 1 from core 2: 2100.59 8.21 7614.43 1756.69 25294.96 00:22:29.128 PCIE (0000:00:12.0) NSID 1 from core 2: 2100.59 8.21 7616.38 1661.34 24988.08 00:22:29.128 PCIE (0000:00:12.0) NSID 2 from core 2: 2100.59 8.21 7614.19 1691.30 25965.05 00:22:29.128 PCIE (0000:00:12.0) NSID 3 from core 2: 2100.59 8.21 7616.59 1544.40 25845.82 00:22:29.128 ======================================================== 00:22:29.128 Total : 12603.55 49.23 7615.50 1544.40 25965.05 00:22:29.128 00:22:29.128 14:41:37 -- nvme/nvme.sh@56 -- # wait 70950 00:22:31.030 Initializing NVMe Controllers 00:22:31.030 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:22:31.030 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:22:31.030 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:22:31.030 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:22:31.030 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:22:31.030 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:22:31.030 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:22:31.030 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:22:31.030 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:22:31.030 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:22:31.030 Initialization complete. Launching workers. 
00:22:31.030 ======================================================== 00:22:31.030 Latency(us) 00:22:31.030 Device Information : IOPS MiB/s Average min max 00:22:31.030 PCIE (0000:00:10.0) NSID 1 from core 0: 7730.87 30.20 2068.01 953.02 14756.88 00:22:31.030 PCIE (0000:00:11.0) NSID 1 from core 0: 7730.87 30.20 2069.14 980.39 16775.39 00:22:31.030 PCIE (0000:00:13.0) NSID 1 from core 0: 7730.87 30.20 2069.09 927.03 16374.17 00:22:31.030 PCIE (0000:00:12.0) NSID 1 from core 0: 7730.87 30.20 2069.05 881.84 15912.83 00:22:31.031 PCIE (0000:00:12.0) NSID 2 from core 0: 7730.87 30.20 2069.00 844.08 15661.70 00:22:31.031 PCIE (0000:00:12.0) NSID 3 from core 0: 7730.87 30.20 2068.95 793.08 15427.62 00:22:31.031 ======================================================== 00:22:31.031 Total : 46385.20 181.19 2068.87 793.08 16775.39 00:22:31.031 00:22:31.031 14:41:39 -- nvme/nvme.sh@57 -- # wait 70951 00:22:31.031 14:41:39 -- nvme/nvme.sh@61 -- # pid0=71019 00:22:31.031 14:41:39 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:22:31.031 14:41:39 -- nvme/nvme.sh@63 -- # pid1=71020 00:22:31.031 14:41:39 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:22:31.031 14:41:39 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:22:34.315 Initializing NVMe Controllers 00:22:34.315 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:22:34.315 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:22:34.315 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:22:34.315 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:22:34.315 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:22:34.315 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:22:34.315 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:22:34.315 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:22:34.315 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:22:34.315 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:22:34.315 Initialization complete. Launching workers. 
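[Annotation] The nvme_multi_secondary invocations recorded in this section launch several spdk_nvme_perf instances against the same controllers on disjoint core masks (-c 0x1/0x2/0x4); the shared -i 0 argument appears to be the shared-memory ID that lets the secondaries join the primary's process group. A condensed sketch of that launch pattern, reconstructed from the xtrace lines -- the real function's backgrounding, sleeps, and waits may differ:

PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

"$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # primary, core 0
pid_primary=$!
sleep 1                                             # give the primary time to init (assumption)
"$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # secondary, core 1
pid0=$!
"$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &   # secondary, core 2
pid1=$!

wait "$pid0" "$pid1" "$pid_primary"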
00:22:34.315 ======================================================== 00:22:34.315 Latency(us) 00:22:34.315 Device Information : IOPS MiB/s Average min max 00:22:34.315 PCIE (0000:00:10.0) NSID 1 from core 1: 5033.88 19.66 3176.73 1181.40 6782.34 00:22:34.315 PCIE (0000:00:11.0) NSID 1 from core 1: 5033.88 19.66 3178.57 1209.96 8444.28 00:22:34.315 PCIE (0000:00:13.0) NSID 1 from core 1: 5033.88 19.66 3178.86 1198.75 7728.01 00:22:34.315 PCIE (0000:00:12.0) NSID 1 from core 1: 5039.21 19.68 3175.69 1204.01 7500.07 00:22:34.315 PCIE (0000:00:12.0) NSID 2 from core 1: 5039.21 19.68 3175.90 1175.04 6976.45 00:22:34.315 PCIE (0000:00:12.0) NSID 3 from core 1: 5039.21 19.68 3176.00 1202.91 7097.40 00:22:34.315 ======================================================== 00:22:34.315 Total : 30219.25 118.04 3176.96 1175.04 8444.28 00:22:34.315 00:22:34.315 Initializing NVMe Controllers 00:22:34.315 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:22:34.315 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:22:34.315 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:22:34.315 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:22:34.315 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:22:34.315 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:22:34.315 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:22:34.315 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:22:34.315 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:22:34.315 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:22:34.315 Initialization complete. Launching workers. 00:22:34.315 ======================================================== 00:22:34.315 Latency(us) 00:22:34.315 Device Information : IOPS MiB/s Average min max 00:22:34.315 PCIE (0000:00:10.0) NSID 1 from core 0: 4932.51 19.27 3241.94 1190.58 17413.38 00:22:34.315 PCIE (0000:00:11.0) NSID 1 from core 0: 4932.51 19.27 3243.15 1219.03 17448.45 00:22:34.315 PCIE (0000:00:13.0) NSID 1 from core 0: 4932.51 19.27 3243.06 1121.28 17331.20 00:22:34.315 PCIE (0000:00:12.0) NSID 1 from core 0: 4932.51 19.27 3242.97 1034.00 16603.89 00:22:34.315 PCIE (0000:00:12.0) NSID 2 from core 0: 4932.51 19.27 3242.88 969.45 16995.26 00:22:34.315 PCIE (0000:00:12.0) NSID 3 from core 0: 4932.51 19.27 3242.59 909.17 17203.04 00:22:34.315 ======================================================== 00:22:34.315 Total : 29595.09 115.61 3242.77 909.17 17448.45 00:22:34.315 00:22:36.847 Initializing NVMe Controllers 00:22:36.847 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:22:36.847 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:22:36.847 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:22:36.847 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:22:36.847 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:22:36.847 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:22:36.847 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:22:36.847 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:22:36.847 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:22:36.847 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:22:36.847 Initialization complete. Launching workers. 
00:22:36.847 ======================================================== 00:22:36.847 Latency(us) 00:22:36.847 Device Information : IOPS MiB/s Average min max 00:22:36.847 PCIE (0000:00:10.0) NSID 1 from core 2: 3260.58 12.74 4905.66 1080.17 19091.00 00:22:36.847 PCIE (0000:00:11.0) NSID 1 from core 2: 3260.58 12.74 4906.61 1105.57 18988.24 00:22:36.847 PCIE (0000:00:13.0) NSID 1 from core 2: 3260.58 12.74 4906.49 1061.80 19434.45 00:22:36.847 PCIE (0000:00:12.0) NSID 1 from core 2: 3260.58 12.74 4906.37 1069.09 19648.50 00:22:36.847 PCIE (0000:00:12.0) NSID 2 from core 2: 3260.58 12.74 4906.25 1018.04 19445.46 00:22:36.847 PCIE (0000:00:12.0) NSID 3 from core 2: 3260.58 12.74 4906.12 833.86 19079.43 00:22:36.847 ======================================================== 00:22:36.847 Total : 19563.49 76.42 4906.25 833.86 19648.50 00:22:36.847 00:22:36.847 ************************************ 00:22:36.847 END TEST nvme_multi_secondary 00:22:36.847 ************************************ 00:22:36.847 14:41:44 -- nvme/nvme.sh@65 -- # wait 71019 00:22:36.847 14:41:44 -- nvme/nvme.sh@66 -- # wait 71020 00:22:36.847 00:22:36.847 real 0m11.145s 00:22:36.847 user 0m18.563s 00:22:36.847 sys 0m1.122s 00:22:36.847 14:41:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:36.847 14:41:44 -- common/autotest_common.sh@10 -- # set +x 00:22:36.847 14:41:44 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:22:36.847 14:41:44 -- nvme/nvme.sh@102 -- # kill_stub 00:22:36.847 14:41:44 -- common/autotest_common.sh@1075 -- # [[ -e /proc/69876 ]] 00:22:36.847 14:41:44 -- common/autotest_common.sh@1076 -- # kill 69876 00:22:36.847 14:41:44 -- common/autotest_common.sh@1077 -- # wait 69876 00:22:36.847 [2024-04-17 14:41:44.944452] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70885) is not found. Dropping the request. 00:22:36.847 [2024-04-17 14:41:44.944982] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70885) is not found. Dropping the request. 00:22:36.847 [2024-04-17 14:41:44.945317] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70885) is not found. Dropping the request. 00:22:36.847 [2024-04-17 14:41:44.945791] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70885) is not found. Dropping the request. 00:22:36.848 [2024-04-17 14:41:44.949110] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70885) is not found. Dropping the request. 00:22:36.848 [2024-04-17 14:41:44.949327] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70885) is not found. Dropping the request. 00:22:36.848 [2024-04-17 14:41:44.949514] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70885) is not found. Dropping the request. 00:22:36.848 [2024-04-17 14:41:44.949712] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70885) is not found. Dropping the request. 00:22:36.848 [2024-04-17 14:41:44.952616] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70885) is not found. Dropping the request. 
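For reference, the three spdk_nvme_perf runs whose latency tables appear above execute concurrently against the same controllers. Reconstructed from the traced commands (the flags are verbatim from the log; the launch order and variable names are assumptions), the pattern is roughly:

    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    # -i 0 joins the shared-memory group of the long-running stub process
    # (pid 69876 in this log), so all three runs share the attached controllers
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 & pid0=$!
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4   # longest run stays in the foreground
    wait $pid0
    wait $pid1

Each instance pins to a different core mask (-c) and issues 4 KiB reads at queue depth 16, which is why three separate per-core latency tables are printed.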
00:22:36.848 [2024-04-17 14:41:44.952817] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70885) is not found. Dropping the request. 00:22:36.848 [2024-04-17 14:41:44.953001] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70885) is not found. Dropping the request. 00:22:36.848 [2024-04-17 14:41:44.953132] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70885) is not found. Dropping the request. 00:22:36.848 [2024-04-17 14:41:44.956179] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70885) is not found. Dropping the request. 00:22:36.848 [2024-04-17 14:41:44.956388] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70885) is not found. Dropping the request. 00:22:36.848 [2024-04-17 14:41:44.956660] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70885) is not found. Dropping the request. 00:22:36.848 [2024-04-17 14:41:44.956822] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70885) is not found. Dropping the request. 00:22:36.848 14:41:45 -- common/autotest_common.sh@1079 -- # rm -f /var/run/spdk_stub0 00:22:36.848 14:41:45 -- common/autotest_common.sh@1083 -- # echo 2 00:22:36.848 14:41:45 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:22:36.848 14:41:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:36.848 14:41:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:36.848 14:41:45 -- common/autotest_common.sh@10 -- # set +x 00:22:36.848 ************************************ 00:22:36.848 START TEST bdev_nvme_reset_stuck_adm_cmd 00:22:36.848 ************************************ 00:22:36.848 14:41:45 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:22:37.107 * Looking for test storage... 
00:22:37.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:22:37.107 14:41:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:22:37.107 14:41:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:22:37.107 14:41:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:22:37.107 14:41:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:22:37.107 14:41:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:22:37.107 14:41:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:22:37.107 14:41:45 -- common/autotest_common.sh@1510 -- # bdfs=() 00:22:37.107 14:41:45 -- common/autotest_common.sh@1510 -- # local bdfs 00:22:37.107 14:41:45 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:22:37.107 14:41:45 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:22:37.107 14:41:45 -- common/autotest_common.sh@1499 -- # bdfs=() 00:22:37.107 14:41:45 -- common/autotest_common.sh@1499 -- # local bdfs 00:22:37.107 14:41:45 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:22:37.107 14:41:45 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:37.107 14:41:45 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:22:37.107 14:41:45 -- common/autotest_common.sh@1501 -- # (( 4 == 0 )) 00:22:37.107 14:41:45 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:22:37.107 14:41:45 -- common/autotest_common.sh@1513 -- # echo 0000:00:10.0 00:22:37.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:37.107 14:41:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:22:37.107 14:41:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:22:37.107 14:41:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=71179 00:22:37.107 14:41:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:22:37.107 14:41:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 71179 00:22:37.107 14:41:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:22:37.107 14:41:45 -- common/autotest_common.sh@817 -- # '[' -z 71179 ']' 00:22:37.107 14:41:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.107 14:41:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:37.107 14:41:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.107 14:41:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:37.107 14:41:45 -- common/autotest_common.sh@10 -- # set +x 00:22:37.107 [2024-04-17 14:41:45.695005] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
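The bdf chosen above comes out of the traced get_nvme_bdfs helper. Condensed into a standalone snippet (the pipeline and paths are verbatim from the trace; only the framing is added):

    rootdir=/home/vagrant/spdk_repo/spdk
    # gen_nvme.sh emits a bdev JSON config; jq pulls each controller's PCI address
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    ((${#bdfs[@]} == 0)) && exit 1   # the (( 4 == 0 )) check above is this guard
    bdf=${bdfs[0]}                   # 0000:00:10.0, the controller the test targets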
00:22:37.107 [2024-04-17 14:41:45.695409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71179 ] 00:22:37.365 [2024-04-17 14:41:45.896845] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:37.624 [2024-04-17 14:41:46.156133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:37.624 [2024-04-17 14:41:46.157524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:37.624 [2024-04-17 14:41:46.157589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.624 [2024-04-17 14:41:46.157628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:38.999 14:41:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:38.999 14:41:47 -- common/autotest_common.sh@850 -- # return 0 00:22:38.999 14:41:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:22:38.999 14:41:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.999 14:41:47 -- common/autotest_common.sh@10 -- # set +x 00:22:38.999 nvme0n1 00:22:38.999 14:41:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.999 14:41:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:22:38.999 14:41:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_K5aT9.txt 00:22:38.999 14:41:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:22:38.999 14:41:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.999 14:41:47 -- common/autotest_common.sh@10 -- # set +x 00:22:38.999 true 00:22:38.999 14:41:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.999 14:41:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:22:38.999 14:41:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1713364907 00:22:38.999 14:41:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=71213 00:22:38.999 14:41:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:22:38.999 14:41:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:22:38.999 14:41:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:22:40.900 14:41:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:40.900 14:41:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:40.900 14:41:49 -- common/autotest_common.sh@10 -- # set +x 00:22:40.900 [2024-04-17 14:41:49.327348] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:22:40.900 [2024-04-17 14:41:49.327825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:40.900 [2024-04-17 14:41:49.327957] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:22:40.900 [2024-04-17 14:41:49.328110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:40.900 [2024-04-17 14:41:49.329702] 
bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:40.900 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 71213 00:22:40.900 14:41:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:40.900 14:41:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 71213 00:22:40.900 14:41:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 71213 00:22:40.900 14:41:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:22:40.900 14:41:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:22:40.900 14:41:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:40.900 14:41:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:40.900 14:41:49 -- common/autotest_common.sh@10 -- # set +x 00:22:40.900 14:41:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:40.900 14:41:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:22:40.900 14:41:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_K5aT9.txt 00:22:40.900 14:41:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:22:40.900 14:41:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:22:40.900 14:41:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:22:40.900 14:41:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:22:40.900 14:41:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:22:40.900 14:41:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:22:40.900 14:41:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:22:40.900 14:41:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:22:40.900 14:41:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:22:40.900 14:41:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:22:40.900 14:41:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:22:40.900 14:41:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:22:40.900 14:41:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:22:40.900 14:41:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:22:40.900 14:41:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:22:40.900 14:41:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:22:40.900 14:41:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:22:40.900 14:41:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:22:40.900 14:41:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:22:40.900 14:41:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_K5aT9.txt 00:22:40.901 14:41:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 71179 00:22:40.901 14:41:49 -- common/autotest_common.sh@936 -- # '[' -z 71179 ']' 00:22:40.901 14:41:49 -- common/autotest_common.sh@940 -- # kill -0 71179 00:22:40.901 14:41:49 -- common/autotest_common.sh@941 -- # uname 00:22:40.901 14:41:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:40.901 14:41:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71179 
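The trace above decodes the saved completion queue entry by hand to verify that the injected status came back. Below is a reconstruction of the base64_decode_bits helper it exercises; the little-endian byte offsets of the status word and the (shift, mask) reading of the two numeric arguments are assumptions inferred from the traced values, not copied from the script source:

    base64_decode_bits() {   # usage: base64_decode_bits <base64-cpl> <shift> <mask>
        local bin_array status
        # expand the 16-byte completion into one "0xNN" string per byte
        bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
        # the status word occupies the last two bytes of the cpl; for
        # AAAAAAAAAAAAAAAAAAACAA== this yields 0x0002, matching status=2 above
        status=$(((bin_array[15] << 8) | bin_array[14]))
        printf '0x%x' $(((status >> $2) & $3))
    }
    # shift 1, mask 255 gives SC=0x1 (the injected error); shift 9, mask 3 gives SCT=0x0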
00:22:40.901 killing process with pid 71179 00:22:40.901 14:41:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:40.901 14:41:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:40.901 14:41:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71179' 00:22:40.901 14:41:49 -- common/autotest_common.sh@955 -- # kill 71179 00:22:40.901 14:41:49 -- common/autotest_common.sh@960 -- # wait 71179 00:22:44.191 14:41:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:22:44.191 14:41:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:22:44.191 00:22:44.191 real 0m6.950s 00:22:44.191 user 0m23.763s 00:22:44.191 sys 0m0.727s 00:22:44.191 ************************************ 00:22:44.191 END TEST bdev_nvme_reset_stuck_adm_cmd 00:22:44.191 ************************************ 00:22:44.191 14:41:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:44.191 14:41:52 -- common/autotest_common.sh@10 -- # set +x 00:22:44.191 14:41:52 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:22:44.191 14:41:52 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:22:44.191 14:41:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:44.191 14:41:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:44.191 14:41:52 -- common/autotest_common.sh@10 -- # set +x 00:22:44.191 ************************************ 00:22:44.191 START TEST nvme_fio 00:22:44.191 ************************************ 00:22:44.191 14:41:52 -- common/autotest_common.sh@1111 -- # nvme_fio_test 00:22:44.191 14:41:52 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:22:44.191 14:41:52 -- nvme/nvme.sh@32 -- # ran_fio=false 00:22:44.191 14:41:52 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:22:44.191 14:41:52 -- common/autotest_common.sh@1499 -- # bdfs=() 00:22:44.191 14:41:52 -- common/autotest_common.sh@1499 -- # local bdfs 00:22:44.191 14:41:52 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:22:44.192 14:41:52 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:44.192 14:41:52 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:22:44.192 14:41:52 -- common/autotest_common.sh@1501 -- # (( 4 == 0 )) 00:22:44.192 14:41:52 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:22:44.192 14:41:52 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:22:44.192 14:41:52 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:22:44.192 14:41:52 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:22:44.192 14:41:52 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:22:44.192 14:41:52 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:22:44.450 14:41:52 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:22:44.450 14:41:52 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:22:44.708 14:41:53 -- nvme/nvme.sh@41 -- # bs=4096 00:22:44.708 14:41:53 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:22:44.708 14:41:53 -- common/autotest_common.sh@1346 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:22:44.708 14:41:53 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:44.708 14:41:53 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:44.708 14:41:53 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:44.708 14:41:53 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:44.708 14:41:53 -- common/autotest_common.sh@1327 -- # shift 00:22:44.708 14:41:53 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:44.708 14:41:53 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:44.708 14:41:53 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:44.708 14:41:53 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:44.708 14:41:53 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:44.708 14:41:53 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:44.708 14:41:53 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:44.708 14:41:53 -- common/autotest_common.sh@1333 -- # break 00:22:44.708 14:41:53 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:22:44.708 14:41:53 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:22:44.967 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:44.967 fio-3.35 00:22:44.967 Starting 1 thread 00:22:48.268 00:22:48.268 test: (groupid=0, jobs=1): err= 0: pid=71368: Wed Apr 17 14:41:56 2024 00:22:48.268 read: IOPS=18.0k, BW=70.3MiB/s (73.7MB/s)(141MiB/2001msec) 00:22:48.268 slat (usec): min=4, max=262, avg= 5.89, stdev= 2.28 00:22:48.268 clat (usec): min=244, max=9267, avg=3537.83, stdev=525.13 00:22:48.268 lat (usec): min=250, max=9273, avg=3543.72, stdev=525.81 00:22:48.268 clat percentiles (usec): 00:22:48.268 | 1.00th=[ 2343], 5.00th=[ 3064], 10.00th=[ 3130], 20.00th=[ 3228], 00:22:48.268 | 30.00th=[ 3294], 40.00th=[ 3392], 50.00th=[ 3490], 60.00th=[ 3556], 00:22:48.268 | 70.00th=[ 3621], 80.00th=[ 3884], 90.00th=[ 4080], 95.00th=[ 4178], 00:22:48.268 | 99.00th=[ 4883], 99.50th=[ 6718], 99.90th=[ 8848], 99.95th=[ 9110], 00:22:48.268 | 99.99th=[ 9110] 00:22:48.268 bw ( KiB/s): min=67680, max=75768, per=100.00%, avg=72194.67, stdev=4125.35, samples=3 00:22:48.268 iops : min=16920, max=18942, avg=18048.67, stdev=1031.34, samples=3 00:22:48.268 write: IOPS=18.0k, BW=70.4MiB/s (73.8MB/s)(141MiB/2001msec); 0 zone resets 00:22:48.268 slat (usec): min=4, max=799, avg= 6.04, stdev= 4.45 00:22:48.268 clat (usec): min=289, max=9373, avg=3540.73, stdev=511.92 00:22:48.268 lat (usec): min=296, max=9379, avg=3546.76, stdev=512.53 00:22:48.268 clat percentiles (usec): 00:22:48.268 | 1.00th=[ 2245], 5.00th=[ 3064], 10.00th=[ 3130], 20.00th=[ 3228], 00:22:48.268 | 30.00th=[ 3326], 40.00th=[ 3392], 50.00th=[ 3490], 60.00th=[ 3556], 00:22:48.268 | 70.00th=[ 3621], 80.00th=[ 3884], 90.00th=[ 4080], 95.00th=[ 4178], 00:22:48.268 | 99.00th=[ 4883], 99.50th=[ 6390], 99.90th=[ 8586], 99.95th=[ 8979], 00:22:48.268 | 99.99th=[ 9241] 00:22:48.268 bw ( KiB/s): min=67744, max=75392, per=100.00%, avg=72114.67, 
stdev=3939.48, samples=3 00:22:48.268 iops : min=16936, max=18848, avg=18028.67, stdev=984.87, samples=3 00:22:48.268 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:22:48.268 lat (msec) : 2=0.61%, 4=86.26%, 10=13.09% 00:22:48.268 cpu : usr=99.15%, sys=0.05%, ctx=4, majf=0, minf=605 00:22:48.268 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:48.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:48.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:48.268 issued rwts: total=36019,36052,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:48.268 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:48.268 00:22:48.268 Run status group 0 (all jobs): 00:22:48.268 READ: bw=70.3MiB/s (73.7MB/s), 70.3MiB/s-70.3MiB/s (73.7MB/s-73.7MB/s), io=141MiB (148MB), run=2001-2001msec 00:22:48.268 WRITE: bw=70.4MiB/s (73.8MB/s), 70.4MiB/s-70.4MiB/s (73.8MB/s-73.8MB/s), io=141MiB (148MB), run=2001-2001msec 00:22:48.268 ----------------------------------------------------- 00:22:48.268 Suppressions used: 00:22:48.268 count bytes template 00:22:48.268 1 32 /usr/src/fio/parse.c 00:22:48.268 1 8 libtcmalloc_minimal.so 00:22:48.268 ----------------------------------------------------- 00:22:48.268 00:22:48.268 14:41:56 -- nvme/nvme.sh@44 -- # ran_fio=true 00:22:48.268 14:41:56 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:22:48.268 14:41:56 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:22:48.268 14:41:56 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:22:48.526 14:41:57 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:22:48.526 14:41:57 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:22:49.092 14:41:57 -- nvme/nvme.sh@41 -- # bs=4096 00:22:49.092 14:41:57 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:22:49.092 14:41:57 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:22:49.092 14:41:57 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:49.092 14:41:57 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:49.092 14:41:57 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:49.092 14:41:57 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:49.092 14:41:57 -- common/autotest_common.sh@1327 -- # shift 00:22:49.092 14:41:57 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:49.092 14:41:57 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:49.092 14:41:57 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:49.092 14:41:57 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:49.092 14:41:57 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:49.092 14:41:57 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:49.092 14:41:57 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:49.092 14:41:57 -- common/autotest_common.sh@1333 -- # break 00:22:49.092 14:41:57 -- common/autotest_common.sh@1338 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:22:49.092 14:41:57 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:22:49.092 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:49.092 fio-3.35 00:22:49.092 Starting 1 thread 00:22:53.327 00:22:53.327 test: (groupid=0, jobs=1): err= 0: pid=71433: Wed Apr 17 14:42:01 2024 00:22:53.327 read: IOPS=18.1k, BW=70.7MiB/s (74.1MB/s)(141MiB/2001msec) 00:22:53.327 slat (nsec): min=4439, max=66634, avg=5789.87, stdev=1731.83 00:22:53.327 clat (usec): min=269, max=9227, avg=3521.07, stdev=707.52 00:22:53.327 lat (usec): min=274, max=9240, avg=3526.86, stdev=708.37 00:22:53.327 clat percentiles (usec): 00:22:53.327 | 1.00th=[ 2507], 5.00th=[ 2999], 10.00th=[ 3064], 20.00th=[ 3130], 00:22:53.327 | 30.00th=[ 3195], 40.00th=[ 3228], 50.00th=[ 3261], 60.00th=[ 3326], 00:22:53.327 | 70.00th=[ 3458], 80.00th=[ 3949], 90.00th=[ 4178], 95.00th=[ 5014], 00:22:53.327 | 99.00th=[ 6259], 99.50th=[ 6915], 99.90th=[ 8356], 99.95th=[ 8979], 00:22:53.327 | 99.99th=[ 9241] 00:22:53.327 bw ( KiB/s): min=71080, max=76768, per=100.00%, avg=74205.33, stdev=2885.44, samples=3 00:22:53.327 iops : min=17770, max=19192, avg=18551.33, stdev=721.36, samples=3 00:22:53.327 write: IOPS=18.1k, BW=70.8MiB/s (74.2MB/s)(142MiB/2001msec); 0 zone resets 00:22:53.327 slat (usec): min=4, max=100, avg= 5.92, stdev= 1.90 00:22:53.327 clat (usec): min=234, max=9242, avg=3522.96, stdev=695.46 00:22:53.327 lat (usec): min=239, max=9254, avg=3528.88, stdev=696.31 00:22:53.327 clat percentiles (usec): 00:22:53.327 | 1.00th=[ 2573], 5.00th=[ 2999], 10.00th=[ 3064], 20.00th=[ 3130], 00:22:53.327 | 30.00th=[ 3195], 40.00th=[ 3228], 50.00th=[ 3294], 60.00th=[ 3359], 00:22:53.327 | 70.00th=[ 3458], 80.00th=[ 3949], 90.00th=[ 4178], 95.00th=[ 4948], 00:22:53.327 | 99.00th=[ 6259], 99.50th=[ 6915], 99.90th=[ 8356], 99.95th=[ 8848], 00:22:53.327 | 99.99th=[ 9110] 00:22:53.327 bw ( KiB/s): min=71128, max=76720, per=100.00%, avg=74234.67, stdev=2847.31, samples=3 00:22:53.327 iops : min=17782, max=19180, avg=18558.67, stdev=711.83, samples=3 00:22:53.327 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.03% 00:22:53.327 lat (msec) : 2=0.42%, 4=82.90%, 10=16.62% 00:22:53.327 cpu : usr=99.15%, sys=0.10%, ctx=9, majf=0, minf=605 00:22:53.327 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:53.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:53.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:53.327 issued rwts: total=36201,36250,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:53.327 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:53.327 00:22:53.327 Run status group 0 (all jobs): 00:22:53.327 READ: bw=70.7MiB/s (74.1MB/s), 70.7MiB/s-70.7MiB/s (74.1MB/s-74.1MB/s), io=141MiB (148MB), run=2001-2001msec 00:22:53.327 WRITE: bw=70.8MiB/s (74.2MB/s), 70.8MiB/s-70.8MiB/s (74.2MB/s-74.2MB/s), io=142MiB (148MB), run=2001-2001msec 00:22:53.327 ----------------------------------------------------- 00:22:53.327 Suppressions used: 00:22:53.327 count bytes template 00:22:53.327 1 32 /usr/src/fio/parse.c 00:22:53.327 1 8 libtcmalloc_minimal.so 00:22:53.327 ----------------------------------------------------- 00:22:53.327 00:22:53.327 14:42:01 -- nvme/nvme.sh@44 -- # ran_fio=true 00:22:53.327 
14:42:01 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:22:53.327 14:42:01 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:22:53.327 14:42:01 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:22:53.327 14:42:01 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:22:53.327 14:42:01 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:22:53.585 14:42:01 -- nvme/nvme.sh@41 -- # bs=4096 00:22:53.585 14:42:01 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:22:53.585 14:42:01 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:22:53.585 14:42:01 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:53.585 14:42:01 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:53.585 14:42:01 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:53.585 14:42:01 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:53.585 14:42:01 -- common/autotest_common.sh@1327 -- # shift 00:22:53.585 14:42:01 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:53.585 14:42:01 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:53.585 14:42:01 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:53.585 14:42:01 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:53.585 14:42:01 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:53.585 14:42:01 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:53.585 14:42:01 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:53.585 14:42:01 -- common/autotest_common.sh@1333 -- # break 00:22:53.585 14:42:01 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:22:53.585 14:42:01 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:22:53.585 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:53.585 fio-3.35 00:22:53.585 Starting 1 thread 00:22:57.772 00:22:57.772 test: (groupid=0, jobs=1): err= 0: pid=71495: Wed Apr 17 14:42:05 2024 00:22:57.772 read: IOPS=19.2k, BW=75.2MiB/s (78.8MB/s)(150MiB/2001msec) 00:22:57.772 slat (nsec): min=4393, max=69240, avg=5418.78, stdev=1470.09 00:22:57.772 clat (usec): min=392, max=10917, avg=3308.44, stdev=606.79 00:22:57.772 lat (usec): min=399, max=10960, avg=3313.86, stdev=607.42 00:22:57.772 clat percentiles (usec): 00:22:57.772 | 1.00th=[ 2212], 5.00th=[ 2737], 10.00th=[ 2999], 20.00th=[ 3064], 00:22:57.772 | 30.00th=[ 3130], 40.00th=[ 3163], 50.00th=[ 3195], 60.00th=[ 3228], 00:22:57.772 | 70.00th=[ 3326], 80.00th=[ 3425], 90.00th=[ 3752], 95.00th=[ 4080], 00:22:57.772 | 99.00th=[ 5735], 99.50th=[ 6259], 99.90th=[ 9765], 99.95th=[10290], 00:22:57.772 | 99.99th=[10814] 00:22:57.772 bw ( KiB/s): min=75280, max=79944, per=100.00%, avg=78269.33, stdev=2595.09, samples=3 00:22:57.772 iops : min=18820, max=19986, 
avg=19567.33, stdev=648.77, samples=3 00:22:57.772 write: IOPS=19.2k, BW=75.1MiB/s (78.7MB/s)(150MiB/2001msec); 0 zone resets 00:22:57.772 slat (nsec): min=4537, max=47360, avg=5725.48, stdev=1509.55 00:22:57.772 clat (usec): min=337, max=10957, avg=3316.58, stdev=622.44 00:22:57.772 lat (usec): min=345, max=10963, avg=3322.30, stdev=623.05 00:22:57.772 clat percentiles (usec): 00:22:57.772 | 1.00th=[ 2245], 5.00th=[ 2737], 10.00th=[ 2999], 20.00th=[ 3064], 00:22:57.772 | 30.00th=[ 3130], 40.00th=[ 3163], 50.00th=[ 3195], 60.00th=[ 3261], 00:22:57.772 | 70.00th=[ 3326], 80.00th=[ 3425], 90.00th=[ 3752], 95.00th=[ 4080], 00:22:57.772 | 99.00th=[ 5800], 99.50th=[ 6456], 99.90th=[ 9896], 99.95th=[10290], 00:22:57.772 | 99.99th=[10814] 00:22:57.772 bw ( KiB/s): min=75216, max=80192, per=100.00%, avg=78376.00, stdev=2746.80, samples=3 00:22:57.772 iops : min=18804, max=20048, avg=19594.00, stdev=686.70, samples=3 00:22:57.772 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:22:57.772 lat (msec) : 2=0.50%, 4=93.75%, 10=5.65%, 20=0.08% 00:22:57.772 cpu : usr=99.15%, sys=0.15%, ctx=6, majf=0, minf=605 00:22:57.772 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:57.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:57.772 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:57.772 issued rwts: total=38507,38458,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:57.772 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:57.772 00:22:57.772 Run status group 0 (all jobs): 00:22:57.772 READ: bw=75.2MiB/s (78.8MB/s), 75.2MiB/s-75.2MiB/s (78.8MB/s-78.8MB/s), io=150MiB (158MB), run=2001-2001msec 00:22:57.772 WRITE: bw=75.1MiB/s (78.7MB/s), 75.1MiB/s-75.1MiB/s (78.7MB/s-78.7MB/s), io=150MiB (158MB), run=2001-2001msec 00:22:57.772 ----------------------------------------------------- 00:22:57.772 Suppressions used: 00:22:57.772 count bytes template 00:22:57.772 1 32 /usr/src/fio/parse.c 00:22:57.772 1 8 libtcmalloc_minimal.so 00:22:57.772 ----------------------------------------------------- 00:22:57.772 00:22:57.772 14:42:05 -- nvme/nvme.sh@44 -- # ran_fio=true 00:22:57.772 14:42:05 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:22:57.772 14:42:05 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:22:57.772 14:42:05 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:22:57.772 14:42:06 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:22:57.772 14:42:06 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:22:58.032 14:42:06 -- nvme/nvme.sh@41 -- # bs=4096 00:22:58.032 14:42:06 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:22:58.032 14:42:06 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:22:58.032 14:42:06 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:58.032 14:42:06 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:58.032 14:42:06 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:58.032 14:42:06 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:58.032 14:42:06 
-- common/autotest_common.sh@1327 -- # shift 00:22:58.032 14:42:06 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:58.032 14:42:06 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:58.032 14:42:06 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:58.032 14:42:06 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:58.032 14:42:06 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:58.032 14:42:06 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:58.032 14:42:06 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:58.032 14:42:06 -- common/autotest_common.sh@1333 -- # break 00:22:58.032 14:42:06 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:22:58.032 14:42:06 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:22:58.297 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:58.297 fio-3.35 00:22:58.297 Starting 1 thread 00:23:03.570 00:23:03.570 test: (groupid=0, jobs=1): err= 0: pid=71556: Wed Apr 17 14:42:11 2024 00:23:03.570 read: IOPS=19.2k, BW=74.9MiB/s (78.5MB/s)(150MiB/2001msec) 00:23:03.571 slat (nsec): min=4481, max=68734, avg=5566.67, stdev=1470.59 00:23:03.571 clat (usec): min=211, max=9058, avg=3324.65, stdev=541.03 00:23:03.571 lat (usec): min=217, max=9082, avg=3330.22, stdev=541.55 00:23:03.571 clat percentiles (usec): 00:23:03.571 | 1.00th=[ 1860], 5.00th=[ 2835], 10.00th=[ 2999], 20.00th=[ 3064], 00:23:03.571 | 30.00th=[ 3097], 40.00th=[ 3130], 50.00th=[ 3195], 60.00th=[ 3228], 00:23:03.571 | 70.00th=[ 3326], 80.00th=[ 3556], 90.00th=[ 4146], 95.00th=[ 4359], 00:23:03.571 | 99.00th=[ 5080], 99.50th=[ 5407], 99.90th=[ 5932], 99.95th=[ 6849], 00:23:03.571 | 99.99th=[ 8717] 00:23:03.571 bw ( KiB/s): min=69724, max=78992, per=98.90%, avg=75820.00, stdev=5280.75, samples=3 00:23:03.571 iops : min=17431, max=19748, avg=18955.00, stdev=1320.19, samples=3 00:23:03.571 write: IOPS=19.1k, BW=74.8MiB/s (78.4MB/s)(150MiB/2001msec); 0 zone resets 00:23:03.571 slat (nsec): min=4550, max=78876, avg=5699.93, stdev=1487.25 00:23:03.571 clat (usec): min=256, max=8825, avg=3332.11, stdev=548.91 00:23:03.571 lat (usec): min=262, max=8839, avg=3337.81, stdev=549.45 00:23:03.571 clat percentiles (usec): 00:23:03.571 | 1.00th=[ 1844], 5.00th=[ 2835], 10.00th=[ 2999], 20.00th=[ 3064], 00:23:03.571 | 30.00th=[ 3097], 40.00th=[ 3130], 50.00th=[ 3195], 60.00th=[ 3228], 00:23:03.571 | 70.00th=[ 3326], 80.00th=[ 3589], 90.00th=[ 4178], 95.00th=[ 4359], 00:23:03.571 | 99.00th=[ 5145], 99.50th=[ 5473], 99.90th=[ 5866], 99.95th=[ 7111], 00:23:03.571 | 99.99th=[ 8455] 00:23:03.571 bw ( KiB/s): min=69684, max=79136, per=99.10%, avg=75900.00, stdev=5384.74, samples=3 00:23:03.571 iops : min=17421, max=19784, avg=18975.00, stdev=1346.18, samples=3 00:23:03.571 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.06% 00:23:03.571 lat (msec) : 2=1.27%, 4=85.76%, 10=12.89% 00:23:03.571 cpu : usr=99.15%, sys=0.10%, ctx=23, majf=0, minf=603 00:23:03.571 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:03.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:03.571 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:23:03.571 issued rwts: total=38352,38314,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:03.571 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:03.571 00:23:03.571 Run status group 0 (all jobs): 00:23:03.571 READ: bw=74.9MiB/s (78.5MB/s), 74.9MiB/s-74.9MiB/s (78.5MB/s-78.5MB/s), io=150MiB (157MB), run=2001-2001msec 00:23:03.571 WRITE: bw=74.8MiB/s (78.4MB/s), 74.8MiB/s-74.8MiB/s (78.4MB/s-78.4MB/s), io=150MiB (157MB), run=2001-2001msec 00:23:03.571 ----------------------------------------------------- 00:23:03.571 Suppressions used: 00:23:03.571 count bytes template 00:23:03.571 1 32 /usr/src/fio/parse.c 00:23:03.571 1 8 libtcmalloc_minimal.so 00:23:03.571 ----------------------------------------------------- 00:23:03.571 00:23:03.571 14:42:11 -- nvme/nvme.sh@44 -- # ran_fio=true 00:23:03.571 ************************************ 00:23:03.571 END TEST nvme_fio 00:23:03.571 ************************************ 00:23:03.571 14:42:11 -- nvme/nvme.sh@46 -- # true 00:23:03.571 00:23:03.571 real 0m19.409s 00:23:03.571 user 0m15.628s 00:23:03.571 sys 0m2.355s 00:23:03.571 14:42:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:03.571 14:42:11 -- common/autotest_common.sh@10 -- # set +x 00:23:03.571 00:23:03.571 real 1m37.187s 00:23:03.571 user 3m50.477s 00:23:03.571 sys 0m21.784s 00:23:03.571 14:42:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:03.571 ************************************ 00:23:03.571 END TEST nvme 00:23:03.571 ************************************ 00:23:03.571 14:42:11 -- common/autotest_common.sh@10 -- # set +x 00:23:03.571 14:42:11 -- spdk/autotest.sh@212 -- # [[ 0 -eq 1 ]] 00:23:03.571 14:42:11 -- spdk/autotest.sh@216 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:23:03.571 14:42:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:03.571 14:42:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:03.571 14:42:11 -- common/autotest_common.sh@10 -- # set +x 00:23:03.571 ************************************ 00:23:03.571 START TEST nvme_scc 00:23:03.571 ************************************ 00:23:03.571 14:42:12 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:23:03.571 * Looking for test storage... 
00:23:03.571 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:23:03.571 14:42:12 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:23:03.571 14:42:12 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:23:03.571 14:42:12 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:23:03.571 14:42:12 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:03.571 14:42:12 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:03.571 14:42:12 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:03.571 14:42:12 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:03.571 14:42:12 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:03.571 14:42:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.571 14:42:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.571 14:42:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.571 14:42:12 -- paths/export.sh@5 -- # export PATH 00:23:03.571 14:42:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.571 14:42:12 -- nvme/functions.sh@10 -- # ctrls=() 00:23:03.571 14:42:12 -- nvme/functions.sh@10 -- # declare -A ctrls 00:23:03.571 14:42:12 -- nvme/functions.sh@11 -- # nvmes=() 00:23:03.571 14:42:12 -- nvme/functions.sh@11 -- # declare -A nvmes 00:23:03.571 14:42:12 -- nvme/functions.sh@12 -- # bdfs=() 00:23:03.571 14:42:12 -- nvme/functions.sh@12 -- # declare -A bdfs 00:23:03.571 14:42:12 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:23:03.571 14:42:12 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:23:03.571 14:42:12 -- nvme/functions.sh@14 -- # nvme_name= 00:23:03.571 14:42:12 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:03.571 14:42:12 -- nvme/nvme_scc.sh@12 -- # uname 00:23:03.571 14:42:12 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 
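Stepping back to the four nvme_fio runs above: each drives stock fio through SPDK's external ioengine instead of the kernel NVMe driver. Assembled from the traced fio_plugin steps into one snippet (the libasan probe and the LD_PRELOAD ordering are verbatim from the trace; running it outside the harness is the assumption):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    # the sanitizer runtime is preloaded ahead of the plugin, as the traced helper does
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
    # the filename encodes transport and PCI address, with dots standing in for colons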
00:23:03.571 14:42:12 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:23:03.571 14:42:12 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:04.136 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:04.136 Waiting for block devices as requested 00:23:04.394 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:04.394 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:04.394 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:23:04.653 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:23:09.929 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:23:09.929 14:42:18 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:23:09.929 14:42:18 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:23:09.929 14:42:18 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:23:09.929 14:42:18 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:23:09.929 14:42:18 -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:23:09.929 14:42:18 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:23:09.929 14:42:18 -- scripts/common.sh@15 -- # local i 00:23:09.929 14:42:18 -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:23:09.929 14:42:18 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:23:09.929 14:42:18 -- scripts/common.sh@24 -- # return 0 00:23:09.929 14:42:18 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:23:09.929 14:42:18 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:23:09.929 14:42:18 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:23:09.929 14:42:18 -- nvme/functions.sh@18 -- # shift 00:23:09.929 14:42:18 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:23:09.929 14:42:18 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.929 14:42:18 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.929 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.929 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.929 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.929 14:42:18 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # read 
-r reg val 00:23:09.929 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.929 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.929 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.929 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.929 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.929 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.929 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.929 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.929 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.929 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.929 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:23:09.929 
14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.929 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.929 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.929 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.929 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.929 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:23:09.929 14:42:18 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.929 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.929 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 3 
]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 
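For reference, the IFS=:, read -r reg val and eval records repeating above are nvme/functions.sh's nvme_get helper splitting each "field : value" line printed by nvme id-ctrl and caching it in a bash associative array. A minimal standalone sketch of that pattern (variable handling is illustrative, not the functions.sh source itself):

  declare -A nvme0
  while IFS=: read -r reg val; do
      [[ -n $val ]] || continue                        # skip lines with no value, as the trace does
      reg=${reg//[[:space:]]/}                         # normalize the field name
      nvme0[${reg,,}]=${val#"${val%%[![:space:]]*}"}   # trim leading blanks from the value
  done < <(nvme id-ctrl /dev/nvme0)
  echo "oacs=${nvme0[oacs]}"                           # e.g. 0x12a, as captured above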
00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- 
nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # 
nvme0[pels]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # 
eval 'nvme0[awun]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.930 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.930 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:23:09.930 14:42:18 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # 
read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:23:09.931 14:42:18 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:23:09.931 14:42:18 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@56 -- # 
ns_dev=nvme0n1 00:23:09.931 14:42:18 -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:23:09.931 14:42:18 -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@18 -- # shift 00:23:09.931 14:42:18 -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 
-- # eval 'nvme0n1[dps]="0"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 
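A worked example of what these id-ns fields mean for the namespace geometry: flbas=0x4 above selects LBA format 4, which the lbaf table a few records further on lists as "ms:0 lbads:12 rp:0 (in use)", i.e. 2^12 = 4096-byte logical blocks, and nsze=0x140000 is the size in blocks. Hence:

  nsze=$((0x140000))                 # 1310720 logical blocks
  lbads=12                           # from lbaf4: "ms:0 lbads:12 rp:0 (in use)"
  echo $(( nsze << lbads ))          # 5368709120 bytes
  echo $(( (nsze << lbads) >> 30 ))  # 5, i.e. a 5 GiB namespace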
00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:23:09.931 14:42:18 
-- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.931 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.931 14:42:18 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:23:09.931 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:23:09.932 14:42:18 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:23:09.932 14:42:18 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:23:09.932 14:42:18 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:23:09.932 14:42:18 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:23:09.932 14:42:18 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:23:09.932 14:42:18 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:23:09.932 14:42:18 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:23:09.932 14:42:18 -- scripts/common.sh@15 -- # local i 00:23:09.932 14:42:18 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:23:09.932 14:42:18 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:23:09.932 14:42:18 -- scripts/common.sh@24 -- # return 0 00:23:09.932 14:42:18 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:23:09.932 14:42:18 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:23:09.932 14:42:18 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@18 -- # shift 00:23:09.932 14:42:18 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # 
[[ -n 0x1b36 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:23:09.932 14:42:18 -- 
nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 
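The oacs=0x12a captured just above for nvme1 (and earlier for nvme0) is the Optional Admin Command Support bitmask from the NVMe base specification; 0x12a sets bits 1, 3, 5 and 8, which suggests the following decode (bit meanings quoted from the spec as I recall it, so treat as indicative):

  oacs=$((0x12a))
  (( oacs & 1 << 1 )) && echo "Format NVM"              # bit 1
  (( oacs & 1 << 3 )) && echo "Namespace Management"    # bit 3
  (( oacs & 1 << 5 )) && echo "Directives"              # bit 5
  (( oacs & 1 << 8 )) && echo "Doorbell Buffer Config"  # bit 8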
00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.932 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.932 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:23:09.932 14:42:18 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 
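wctemp=343 and cctemp=373 above are the warning and critical composite temperature thresholds, which the NVMe spec reports in kelvins; converted to the nearest degree Celsius:

  for k in 343 373; do echo "$k K is about $(( k - 273 )) C"; done
  # 343 K -> ~70 C (warning threshold), 373 K -> ~100 C (critical threshold)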
00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # 
nvme1[anatt]=0 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:23:09.933 14:42:18 -- 
nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 
rrl:0"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:23:09.933 14:42:18 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:23:09.933 14:42:18 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:23:09.933 14:42:18 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:23:09.933 14:42:18 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@18 -- # shift 00:23:09.933 14:42:18 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 
-- # eval 'nvme1n1[nlbaf]="7"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.933 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:23:09.933 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.933 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 
14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 
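For reference, the repetitive IFS=: / read / eval triplets filling nvme1n1 above (and every other register dump in this trace) come from a small key/value parser: nvme-cli prints id-ctrl and id-ns fields as "reg : value" lines, and the script evals each pair into a global associative array named after the device. A minimal sketch of that loop, reconstructed from the trace — the real nvme/functions.sh may differ in details, and the nvme binary is assumed to be on PATH rather than at /usr/local/src/nvme-cli:

    #!/usr/bin/env bash
    # Sketch of an nvme_get-style parser (reconstructed from this trace,
    # not the verbatim upstream helper).
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                 # e.g. declare -gA nvme1n1=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue       # skip lines without a value
            reg=${reg//[[:space:]]/}        # "nsze      " -> "nsze"
            val=${val# }                    # drop the separator space
            eval "${ref}[$reg]=\"\$val\""   # e.g. nvme1n1[nsze]=0x17a17a
        done < <(nvme "$@")                 # e.g. nvme id-ns /dev/nvme1n1
    }

    nvme_get nvme1n1 id-ns /dev/nvme1n1
    echo "${nvme1n1[nsze]}"                 # -> 0x17a17a on this box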
00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 
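The lbaf0..lbaf7 entries captured next describe the eight LBA formats this QEMU namespace advertises: ms is the metadata bytes per block, lbads the log2 of the data block size, rp the relative performance, and flbas (0x7 above) selects the active format — hence lbaf7's "(in use)" tag, i.e. 4096-byte blocks with 64 bytes of metadata. A small, hypothetical helper to turn one of these strings into a block size:

    # Decode the data block size from an lbafN string as stored above,
    # e.g. "ms:64 lbads:12 rp:0 (in use)" -> 4096.
    lbaf_block_size() {
        local lbads=${1#*lbads:}   # strip everything up to "lbads:"
        lbads=${lbads%% *}         # keep just the number
        echo $((1 << lbads))       # block size is 2^lbads bytes
    }
    lbaf_block_size 'ms:64 lbads:12 rp:0 (in use)'   # prints 4096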
00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:23:09.934 14:42:18 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:23:09.934 14:42:18 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:23:09.934 14:42:18 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:23:09.934 14:42:18 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:23:09.934 14:42:18 -- 
nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:23:09.934 14:42:18 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:23:09.934 14:42:18 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:23:09.934 14:42:18 -- scripts/common.sh@15 -- # local i 00:23:09.934 14:42:18 -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:23:09.934 14:42:18 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:23:09.934 14:42:18 -- scripts/common.sh@24 -- # return 0 00:23:09.934 14:42:18 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:23:09.934 14:42:18 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:23:09.934 14:42:18 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@18 -- # shift 00:23:09.934 14:42:18 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # 
nvme2[ieee]=525400 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.934 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.934 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:23:09.934 14:42:18 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 
00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 
14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 
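The nvme2 dump in progress here is the second pass of the enumeration loop whose registration steps were traced above for nvme1 (functions.sh lines 47-63): walk /sys/class/nvme, filter by PCI address, identify the controller and each of its namespaces, then record the results in lookup maps keyed by device name. A condensed sketch of that outer loop; pci_can_use is stubbed and the sysfs readlink for the PCI address is an assumption, since the trace only shows their results:

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    pci_can_use() { true; }   # stand-in for the scripts/common.sh filter

    scan_nvmes() {
        local ctrl pci ctrl_dev ns ns_dev
        for ctrl in /sys/class/nvme/nvme*; do
            [[ -e $ctrl ]] || continue
            pci=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:12.0
            pci_can_use "$pci" || continue
            ctrl_dev=${ctrl##*/}                             # e.g. nvme2
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"    # fills nvme2[...]
            declare -gA "${ctrl_dev}_ns=()"
            local -n _ctrl_ns=${ctrl_dev}_ns
            for ns in "$ctrl/${ctrl##*/}n"*; do              # nvme2n1, nvme2n2, ...
                [[ -e $ns ]] || continue
                ns_dev=${ns##*/}
                nvme_get "$ns_dev" id-ns "/dev/$ns_dev"      # fills nvme2n1[...]
                _ctrl_ns[${ns##*n}]=$ns_dev
            done
            unset -n _ctrl_ns
            ctrls["$ctrl_dev"]=$ctrl_dev
            nvmes["$ctrl_dev"]=${ctrl_dev}_ns
            bdfs["$ctrl_dev"]=$pci
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
        done
    }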
00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:23:09.935 14:42:18 
-- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 
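The 0x66 and 0x44 about to be recorded are the controller's SQES/CQES fields; each byte packs the maximum entry size in the upper nibble and the required minimum in the lower nibble, both as powers of two, so 0x66 means 64-byte submission queue entries and 0x44 means 16-byte completion queue entries. A quick arithmetic check:

    sqes=0x66 cqes=0x44
    echo $((1 << (sqes & 0xf))) $((1 << (sqes >> 4)))   # 64 64
    echo $((1 << (cqes & 0xf))) $((1 << (cqes >> 4)))   # 16 16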
00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.935 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.935 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:23:09.935 14:42:18 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg 
val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # 
nvme2[icdoff]=0 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:23:09.936 14:42:18 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:23:09.936 14:42:18 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:23:09.936 14:42:18 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:23:09.936 14:42:18 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@18 -- # shift 00:23:09.936 14:42:18 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 
00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read 
-r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:23:09.936 14:42:18 -- 
nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.936 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:23:09.936 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:23:09.936 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.937 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.937 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.937 14:42:18 -- nvme/functions.sh@23 -- # eval 
'nvme2n1[nvmsetid]="0"' 00:23:09.937 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:23:09.937 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.937 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.937 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:09.937 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:23:09.937 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:23:09.937 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.937 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.937 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:23:09.937 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:23:09.937 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:23:09.937 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.937 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.937 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:23:09.937 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:23:09.937 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:23:09.937 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.937 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.937 14:42:18 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:23:09.937 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:23:09.937 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:23:09.937 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.937 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.937 14:42:18 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:23:09.937 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:23:09.937 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:23:09.937 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.937 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.937 14:42:18 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:23:09.937 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:23:09.937 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:23:09.937 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.937 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.937 14:42:18 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:23:09.937 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:23:09.937 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:23:09.937 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.937 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.937 14:42:18 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:23:09.937 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:23:09.937 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:23:09.937 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.937 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.937 14:42:18 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:23:09.937 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:23:09.937 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 
lbads:12 rp:0 ' 00:23:09.937 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.937 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.937 14:42:18 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:23:09.937 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:23:09.937 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:23:09.937 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.937 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.937 14:42:18 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:23:09.937 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:23:09.937 14:42:18 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:23:09.937 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.937 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.937 14:42:18 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:23:09.937 14:42:18 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:23:09.937 14:42:18 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:23:09.937 14:42:18 -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:23:09.937 14:42:18 -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:23:09.937 14:42:18 -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:23:09.937 14:42:18 -- nvme/functions.sh@18 -- # shift 00:23:09.937 14:42:18 -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:23:09.937 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.937 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.937 14:42:18 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:23:09.937 14:42:18 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:23:09.937 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.937 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.937 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:23:09.937 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:23:09.937 14:42:18 -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:23:09.937 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.937 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.937 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:23:09.937 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:23:09.937 14:42:18 -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:23:09.937 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.937 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.937 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:23:09.937 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:23:09.937 14:42:18 -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:23:09.937 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.937 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.937 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:23:09.937 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:23:09.937 14:42:18 -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:23:09.937 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:09.937 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:09.937 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:23:09.937 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:23:09.937 14:42:18 -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:23:09.937 
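Everything in the dump above is produced by one small helper: nvme_get runs nvme-cli, splits each "register : value" output line on the first colon, and stores the pair in a global associative array named after the device. A minimal reconstruction from the functions.sh@16-23 trace lines follows; the whitespace trimming and exact quoting are assumptions, since the trace only shows already-trimmed results:

    # nvme_get <array> <nvme-cli subcommand> <device>
    # e.g. nvme_get nvme2n1 id-ns /dev/nvme2n1
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                       # functions.sh@20: fresh global assoc array
        while IFS=: read -r reg val; do           # functions.sh@21: split "reg : val" on ':'
            reg=${reg//[[:space:]]/}              # assumed trim; trace shows bare names like nsze
            val=${val#"${val%%[![:space:]]*}"}    # assumed trim of leading blanks
            [[ -n $reg && -n $val ]] || continue  # functions.sh@22: skip empty values
            eval "${ref}[${reg}]=\"${val}\""      # functions.sh@23: nvme2n1[nsze]="0x100000"
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # functions.sh@16
    }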
00:23:09.937 14:42:18 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:23:09.937 14:42:18 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:23:09.937 14:42:18 -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:23:09.937 14:42:18 -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:23:09.937 14:42:18 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:23:09.937 14:42:18 -- nvme/functions.sh@23 -- # nvme2n2 id-ns:
    nsze=0x100000  ncap=0x100000  nuse=0x100000  nsfeat=0x14  nlbaf=7  flbas=0x4  mc=0x3  dpc=0x1f  dps=0
    nmic=0  rescap=0  fpi=0  dlfeat=1  nawun=0  nawupf=0  nacwu=0  nabsn=0  nabo=0  nabspf=0  noiob=0
    nvmcap=0  npwg=0  npwa=0  npdg=0  npda=0  nows=0  mssrl=128  mcl=128  msrc=127  nulbaf=0
    anagrpid=0  nsattr=0  nvmsetid=0  endgid=0  nguid=00000000000000000000000000000000  eui64=0000000000000000
    lbaf0='ms:0 lbads:9 rp:0 '  lbaf1='ms:8 lbads:9 rp:0 '  lbaf2='ms:16 lbads:9 rp:0 '  lbaf3='ms:64 lbads:9 rp:0 '
    lbaf4='ms:0 lbads:12 rp:0 (in use)'  lbaf5='ms:8 lbads:12 rp:0 '  lbaf6='ms:16 lbads:12 rp:0 '  lbaf7='ms:64 lbads:12 rp:0 '
00:23:09.937 14:42:18 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
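With the array filled in, the namespace geometry falls straight out of the values logged above: flbas=0x4 selects LBA format 4, whose descriptor lbaf4 reports lbads:12 (2^12 = 4096-byte blocks) and ms:0 (no metadata), and nsze=0x100000 is the size in blocks, so each of these namespaces is 4 GiB. A quick check in plain bash arithmetic:

    echo $(( nvme2n1[nsze] ))            # 1048576 blocks (0x100000)
    echo $(( nvme2n1[nsze] * 2 ** 12 ))  # 4294967296 bytes = 4 GiB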
00:23:09.938 14:42:18 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:23:09.938 14:42:18 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:23:09.938 14:42:18 -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:23:09.938 14:42:18 -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:23:09.938 14:42:18 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:23:09.938 14:42:18 -- nvme/functions.sh@23 -- # nvme2n3 id-ns:
    nsze=0x100000  ncap=0x100000  nuse=0x100000  nsfeat=0x14  nlbaf=7  flbas=0x4  mc=0x3  dpc=0x1f  dps=0
    nmic=0  rescap=0  fpi=0  dlfeat=1  nawun=0  nawupf=0  nacwu=0  nabsn=0  nabo=0  nabspf=0  noiob=0
    nvmcap=0  npwg=0  npwa=0  npdg=0  npda=0  nows=0  mssrl=128  mcl=128  msrc=127  nulbaf=0
    anagrpid=0  nsattr=0  nvmsetid=0  endgid=0  nguid=00000000000000000000000000000000  eui64=0000000000000000
    lbaf0='ms:0 lbads:9 rp:0 '  lbaf1='ms:8 lbads:9 rp:0 '  lbaf2='ms:16 lbads:9 rp:0 '  lbaf3='ms:64 lbads:9 rp:0 '
    lbaf4='ms:0 lbads:12 rp:0 (in use)'  lbaf5='ms:8 lbads:12 rp:0 '  lbaf6='ms:16 lbads:12 rp:0 '  lbaf7='ms:64 lbads:12 rp:0 '
00:23:10.198 14:42:18 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:23:10.198 14:42:18 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:23:10.198 14:42:18 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:23:10.198 14:42:18 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:23:10.198 14:42:18 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
00:23:10.198 14:42:18 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:23:10.198 14:42:18 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:23:10.198 14:42:18 -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:23:10.198 14:42:18 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:23:10.198 14:42:18 -- scripts/common.sh@15 -- # local i
00:23:10.198 14:42:18 -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]]
00:23:10.198 14:42:18 -- scripts/common.sh@22 -- # [[ -z '' ]]
00:23:10.198 14:42:18 -- scripts/common.sh@24 -- # return 0
00:23:10.198 14:42:18 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:23:10.198 14:42:18 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:23:10.199 14:42:18 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
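Here the outer loop advances to the next controller: nvme3 sits at PCI address 0000:00:13.0, pci_can_use() in scripts/common.sh checks that address against the job's block/allow lists (both empty here, so it returns 0), and the same nvme_get parse runs for id-ctrl. A sketch of that loop as the functions.sh@47-63 trace suggests; how $pci is derived is not visible in this slice, so the readlink form below is an assumption:

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # assumed; e.g. 0000:00:13.0
        pci_can_use "$pci" || continue                    # scripts/common.sh block/allow filter
        ctrl_dev=${ctrl##*/}                              # e.g. nvme3
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # fills the nvme3 assoc array
        # ... per-namespace id-ns parsing as shown above ...
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # index 3 for nvme3
    done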
IFS=: 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.199 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.199 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.199 14:42:18 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.199 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.199 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.199 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.199 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.199 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.199 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.199 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.199 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # 
eval 'nvme3[rtd3r]="0"' 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.199 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.199 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.199 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.199 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.199 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.199 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.199 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.199 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.199 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.199 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # 
IFS=: 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.199 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.199 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.199 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.199 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.199 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.199 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.199 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.199 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.199 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.199 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.199 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 
00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.199 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.199 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.199 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.199 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.199 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.199 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:23:10.199 14:42:18 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.199 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 
00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
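[annotation] Every field in this dump lands in the nvme3 associative array through the same four-step xtrace pattern: set IFS=:, read -r reg val, test [[ -n $val ]], then eval the assignment. A minimal standalone sketch of that loop, assuming nvme-cli's "field : value" id-ctrl text output and with illustrative whitespace trimming (not the exact functions.sh code):

    declare -A nvme3=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}               # e.g. "wctemp"
        val=${val# }                           # drop the leading space after ':'
        [[ -n $val ]] && eval "nvme3[$reg]=\"$val\""
    done < <(nvme id-ctrl /dev/nvme3)
    echo "${nvme3[wctemp]}"                    # -> 343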
00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- 
nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:23:10.200 14:42:18 -- 
nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.200 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.200 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:23:10.200 14:42:18 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:23:10.201 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.201 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.201 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.201 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:23:10.201 14:42:18 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:23:10.201 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.201 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.201 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.201 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:23:10.201 14:42:18 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:23:10.201 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.201 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.201 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.201 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:23:10.201 14:42:18 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:23:10.201 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.201 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.201 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.201 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:23:10.201 14:42:18 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:23:10.201 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.201 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.201 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:10.201 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:23:10.201 14:42:18 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:23:10.201 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.201 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.201 14:42:18 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:23:10.201 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:23:10.201 14:42:18 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:23:10.201 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 
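[annotation] A few entries back the dump recorded sqes=0x66 and cqes=0x44. Those fields pack the minimum and maximum queue entry sizes as powers of two in the low and high nibbles, so this controller uses the standard 64-byte submission and 16-byte completion queue entries; a worked decode:

    sqes=0x66; cqes=0x44
    printf 'SQ entry: %d-%d bytes\n' $(( 1 << (sqes & 0xf) )) $(( 1 << (sqes >> 4) ))
    printf 'CQ entry: %d-%d bytes\n' $(( 1 << (cqes & 0xf) )) $(( 1 << (cqes >> 4) ))
    # SQ entry: 64-64 bytes
    # CQ entry: 16-16 bytes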
00:23:10.201 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.201 14:42:18 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:23:10.201 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:23:10.201 14:42:18 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:23:10.201 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.201 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.201 14:42:18 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:23:10.201 14:42:18 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:23:10.201 14:42:18 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:23:10.201 14:42:18 -- nvme/functions.sh@21 -- # IFS=: 00:23:10.201 14:42:18 -- nvme/functions.sh@21 -- # read -r reg val 00:23:10.201 14:42:18 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:23:10.201 14:42:18 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:23:10.201 14:42:18 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:23:10.201 14:42:18 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:23:10.201 14:42:18 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:23:10.201 14:42:18 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:23:10.201 14:42:18 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:23:10.201 14:42:18 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:23:10.201 14:42:18 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:23:10.201 14:42:18 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:23:10.201 14:42:18 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:23:10.201 14:42:18 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:23:10.201 14:42:18 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:23:10.201 14:42:18 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:23:10.201 14:42:18 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:23:10.201 14:42:18 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:23:10.201 14:42:18 -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:23:10.201 14:42:18 -- nvme/functions.sh@184 -- # get_oncs nvme1 00:23:10.201 14:42:18 -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:23:10.201 14:42:18 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:23:10.201 14:42:18 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:23:10.201 14:42:18 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:23:10.201 14:42:18 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:23:10.201 14:42:18 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:23:10.201 14:42:18 -- nvme/functions.sh@76 -- # echo 0x15d 00:23:10.201 14:42:18 -- nvme/functions.sh@184 -- # oncs=0x15d 00:23:10.201 14:42:18 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:23:10.201 14:42:18 -- nvme/functions.sh@197 -- # echo nvme1 00:23:10.201 14:42:18 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:23:10.201 14:42:18 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:23:10.201 14:42:18 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:23:10.201 14:42:18 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:23:10.201 14:42:18 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:23:10.201 14:42:18 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:23:10.201 14:42:18 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:23:10.201 14:42:18 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:23:10.201 14:42:18 -- 
nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:23:10.201 14:42:18 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:23:10.201 14:42:18 -- nvme/functions.sh@76 -- # echo 0x15d 00:23:10.201 14:42:18 -- nvme/functions.sh@184 -- # oncs=0x15d 00:23:10.201 14:42:18 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:23:10.201 14:42:18 -- nvme/functions.sh@197 -- # echo nvme0 00:23:10.201 14:42:18 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:23:10.201 14:42:18 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:23:10.201 14:42:18 -- nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:23:10.201 14:42:18 -- nvme/functions.sh@184 -- # get_oncs nvme3 00:23:10.201 14:42:18 -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:23:10.201 14:42:18 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:23:10.201 14:42:18 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:23:10.201 14:42:18 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:23:10.201 14:42:18 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:23:10.201 14:42:18 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:23:10.201 14:42:18 -- nvme/functions.sh@76 -- # echo 0x15d 00:23:10.201 14:42:18 -- nvme/functions.sh@184 -- # oncs=0x15d 00:23:10.201 14:42:18 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:23:10.201 14:42:18 -- nvme/functions.sh@197 -- # echo nvme3 00:23:10.201 14:42:18 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:23:10.201 14:42:18 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:23:10.201 14:42:18 -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:23:10.201 14:42:18 -- nvme/functions.sh@184 -- # get_oncs nvme2 00:23:10.201 14:42:18 -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:23:10.201 14:42:18 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:23:10.201 14:42:18 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:23:10.201 14:42:18 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:23:10.201 14:42:18 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:23:10.201 14:42:18 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:23:10.201 14:42:18 -- nvme/functions.sh@76 -- # echo 0x15d 00:23:10.201 14:42:18 -- nvme/functions.sh@184 -- # oncs=0x15d 00:23:10.201 14:42:18 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:23:10.201 14:42:18 -- nvme/functions.sh@197 -- # echo nvme2 00:23:10.201 14:42:18 -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:23:10.201 14:42:18 -- nvme/functions.sh@206 -- # echo nvme1 00:23:10.201 14:42:18 -- nvme/functions.sh@207 -- # return 0 00:23:10.201 14:42:18 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:23:10.201 14:42:18 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:23:10.201 14:42:18 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:10.768 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:11.042 lsblk: /dev/nvme3c3n1: not a block device 00:23:11.306 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:11.306 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:11.565 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:23:11.565 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:23:11.565 14:42:20 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:23:11.565 14:42:20 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:23:11.565 14:42:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:11.565 14:42:20 -- 
common/autotest_common.sh@10 -- # set +x 00:23:11.565 ************************************ 00:23:11.565 START TEST nvme_simple_copy 00:23:11.565 ************************************ 00:23:11.565 14:42:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:23:12.150 Initializing NVMe Controllers 00:23:12.150 Attaching to 0000:00:10.0 00:23:12.150 Controller supports SCC. Attached to 0000:00:10.0 00:23:12.150 Namespace ID: 1 size: 6GB 00:23:12.150 Initialization complete. 00:23:12.150 00:23:12.150 Controller QEMU NVMe Ctrl (12340 ) 00:23:12.150 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:23:12.150 Namespace Block Size:4096 00:23:12.150 Writing LBAs 0 to 63 with Random Data 00:23:12.150 Copied LBAs from 0 - 63 to the Destination LBA 256 00:23:12.150 LBAs matching Written Data: 64 00:23:12.150 00:23:12.150 real 0m0.331s 00:23:12.150 user 0m0.122s 00:23:12.150 sys 0m0.107s 00:23:12.150 14:42:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:12.150 ************************************ 00:23:12.150 END TEST nvme_simple_copy 00:23:12.150 14:42:20 -- common/autotest_common.sh@10 -- # set +x 00:23:12.150 ************************************ 00:23:12.150 00:23:12.150 real 0m8.502s 00:23:12.150 user 0m1.294s 00:23:12.150 sys 0m2.156s 00:23:12.150 14:42:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:12.150 14:42:20 -- common/autotest_common.sh@10 -- # set +x 00:23:12.150 ************************************ 00:23:12.150 END TEST nvme_scc 00:23:12.150 ************************************ 00:23:12.150 14:42:20 -- spdk/autotest.sh@218 -- # [[ 0 -eq 1 ]] 00:23:12.150 14:42:20 -- spdk/autotest.sh@221 -- # [[ 0 -eq 1 ]] 00:23:12.150 14:42:20 -- spdk/autotest.sh@224 -- # [[ '' -eq 1 ]] 00:23:12.150 14:42:20 -- spdk/autotest.sh@227 -- # [[ 1 -eq 1 ]] 00:23:12.150 14:42:20 -- spdk/autotest.sh@228 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:23:12.150 14:42:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:12.150 14:42:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:12.150 14:42:20 -- common/autotest_common.sh@10 -- # set +x 00:23:12.150 ************************************ 00:23:12.150 START TEST nvme_fdp 00:23:12.150 ************************************ 00:23:12.150 14:42:20 -- common/autotest_common.sh@1111 -- # test/nvme/nvme_fdp.sh 00:23:12.150 * Looking for test storage... 
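[annotation] Before the FDP run gets going, note how nvme_scc.sh picked its controller above: ctrl_has_scc reads each controller's oncs register and tests bit 8, the Copy command bit, exactly as the "(( oncs & 1 << 8 ))" lines show. All four controllers report 0x15d, so the first one in iteration order (nvme1 at 0000:00:10.0) was selected. The check itself reduces to:

    oncs=0x15d
    (( oncs & 1 << 8 )) && echo "Simple Copy (SCC) supported"   # 0x15d has bit 8 (0x100) set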
00:23:12.150 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:23:12.150 14:42:20 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:23:12.150 14:42:20 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:23:12.150 14:42:20 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:23:12.150 14:42:20 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:12.410 14:42:20 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:12.410 14:42:20 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:12.410 14:42:20 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:12.410 14:42:20 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:12.410 14:42:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.410 14:42:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.410 14:42:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.410 14:42:20 -- paths/export.sh@5 -- # export PATH 00:23:12.410 14:42:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.410 14:42:20 -- nvme/functions.sh@10 -- # ctrls=() 00:23:12.410 14:42:20 -- nvme/functions.sh@10 -- # declare -A ctrls 00:23:12.410 14:42:20 -- nvme/functions.sh@11 -- # nvmes=() 00:23:12.410 14:42:20 -- nvme/functions.sh@11 -- # declare -A nvmes 00:23:12.411 14:42:20 -- nvme/functions.sh@12 -- # bdfs=() 00:23:12.411 14:42:20 -- nvme/functions.sh@12 -- # declare -A bdfs 00:23:12.411 14:42:20 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:23:12.411 14:42:20 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:23:12.411 14:42:20 -- nvme/functions.sh@14 -- # nvme_name= 00:23:12.411 14:42:20 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:12.411 14:42:20 -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:12.670 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:12.943 Waiting for block devices as requested 00:23:12.943 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:12.943 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:13.202 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:23:13.202 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:23:18.515 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:23:18.515 14:42:26 -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:23:18.515 14:42:26 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:23:18.515 14:42:26 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:23:18.515 14:42:26 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:23:18.515 14:42:26 -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:23:18.515 14:42:26 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:23:18.515 14:42:26 -- scripts/common.sh@15 -- # local i 00:23:18.515 14:42:26 -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:23:18.515 14:42:26 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:23:18.515 14:42:26 -- scripts/common.sh@24 -- # return 0 00:23:18.515 14:42:26 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:23:18.515 14:42:26 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:23:18.516 14:42:26 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:23:18.516 14:42:26 -- nvme/functions.sh@18 -- # shift 00:23:18.516 14:42:26 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.516 14:42:26 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:23:18.516 14:42:26 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.516 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.516 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.516 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.516 14:42:26 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.516 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- 
# nvme0[fr]='8.0.0 ' 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.516 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.516 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.516 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.516 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.516 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.516 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.516 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.516 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.516 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.516 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.516 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.516 14:42:26 -- 
nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.516 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.516 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.516 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.516 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.516 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.516 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.516 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.516 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.516 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.516 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 
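[annotation] The scan now walking nvme0's registers follows the skeleton visible at the top of scan_nvme_ctrls: iterate /sys/class/nvme/nvme*, resolve each controller's PCI address, filter through pci_can_use, then hand the device to the nvme_get parser. A hedged outline, assuming the usual sysfs layout where each controller's "device" symlink resolves to its PCI function:

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:11.0
        # pci_can_use "$pci" || continue                  # harness filter seen in scripts/common.sh
        echo "parsing ${ctrl##*/} at $pci"
    done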
00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.516 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.516 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.516 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.516 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.516 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.516 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.516 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.516 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.516 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.517 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.517 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.517 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.517 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 
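[annotation] For the readback side, the earlier "local -n _ctrl=nvme1" lines show how get_nvme_ctrl_feature indirects into these per-controller arrays: a bash nameref (bash 4.3+) aliases the global associative array whose name is passed in. An illustrative reimplementation, not the exact functions.sh code:

    get_reg() {                      # get_reg <array-name> <register>
        local -n _ctrl=$1            # nameref into e.g. the nvme0 array
        [[ -n ${_ctrl[$2]} ]] && echo "${_ctrl[$2]}"
    }
    declare -gA nvme0=( [oncs]=0x15d )
    get_reg nvme0 oncs               # -> 0x15d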
00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.517 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.517 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.517 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.517 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.517 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.517 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.517 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.517 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.517 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.517 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.517 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:23:18.517 14:42:26 
-- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.517 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.517 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.517 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.517 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.517 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.517 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.517 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.517 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.517 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.517 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.517 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.517 
14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.517 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.517 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.517 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.517 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.517 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.517 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.517 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.517 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:23:18.517 14:42:26 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.517 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.517 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.518 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 
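[annotation] nvme0 reports the same oncs=0x15d as the other controllers. Decoded against the NVMe base spec's Optional NVM Command Support bits (from memory, so treat as a hedged reading): Compare (0), Dataset Management (2), Write Zeroes (3), Save/Select in Set/Get Features (4), Timestamp (6), and Copy (8):

    oncs=0x15d    # 0b1_0101_1101
    for f in 0:Compare 2:"Dataset Management" 3:"Write Zeroes" \
             4:"Save/Select in Features" 6:Timestamp 8:Copy; do
        (( oncs & 1 << ${f%%:*} )) && echo "${f#*:}"
    done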
00:23:18.518 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.518 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.518 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.518 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.518 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.518 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.518 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.518 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.518 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.518 14:42:26 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.518 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # 
nvme0[ioccsz]=0 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.518 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.518 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.518 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.518 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.518 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.518 14:42:26 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.518 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.518 14:42:26 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.518 14:42:26 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:23:18.518 14:42:26 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:23:18.518 14:42:26 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:23:18.518 14:42:26 -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:23:18.518 14:42:26 -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:23:18.518 14:42:26 -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:23:18.518 14:42:26 -- 
nvme/functions.sh@18 -- # shift 00:23:18.518 14:42:26 -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.518 14:42:26 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:23:18.518 14:42:26 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.518 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.518 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.518 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.518 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.518 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.518 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.518 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.518 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.518 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # 
read -r reg val 00:23:18.518 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:23:18.518 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.518 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.519 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.519 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.519 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.519 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.519 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.519 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.519 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.519 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.519 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.519 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:23:18.519 14:42:26 
-- nvme/functions.sh@21 -- # IFS=: 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.519 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.519 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.519 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.519 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.519 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.519 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.519 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.519 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.519 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.519 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.519 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # eval 
'nvme0n1[anagrpid]="0"' 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.519 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.519 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.519 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.519 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.519 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.519 14:42:26 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.519 14:42:26 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.519 14:42:26 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.519 14:42:26 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # read -r 
reg val 00:23:18.519 14:42:26 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.519 14:42:26 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.519 14:42:26 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:23:18.519 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.519 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.519 14:42:26 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:23:18.520 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:23:18.520 14:42:26 -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:23:18.520 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.520 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.520 14:42:26 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:23:18.520 14:42:26 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:23:18.520 14:42:26 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:23:18.520 14:42:26 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:23:18.520 14:42:26 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:23:18.520 14:42:26 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:23:18.520 14:42:26 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:23:18.520 14:42:26 -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:23:18.520 14:42:26 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:23:18.520 14:42:26 -- scripts/common.sh@15 -- # local i 00:23:18.520 14:42:26 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:23:18.520 14:42:26 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:23:18.520 14:42:26 -- scripts/common.sh@24 -- # return 0 00:23:18.520 14:42:26 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:23:18.520 14:42:26 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:23:18.520 14:42:26 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:23:18.520 14:42:26 -- nvme/functions.sh@18 -- # shift 00:23:18.520 14:42:26 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:23:18.520 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.520 14:42:26 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:23:18.520 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.520 14:42:26 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:23:18.520 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.520 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.520 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:23:18.520 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:23:18.520 14:42:26 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:23:18.520 
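The pass above shows the whole per-controller pattern in miniature: functions.sh walks /sys/class/nvme/nvme*, filters each device's PCI address through pci_can_use, then has nvme_get run nvme-cli's id-ctrl and fold every "register: value" line into a global associative array (nvme0, nvme1, ...). A minimal re-creation of that parsing step, assuming bash >= 4.2 and nvme-cli on PATH (nvme_get_sketch is a hypothetical name, not the functions.sh helper itself):

    nvme_get_sketch() {   # nvme_get_sketch <array-name> <command...>
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                    # global associative array, as at functions.sh@20
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}           # keys like vid, sn, sqes, subnqn
            val="${val#"${val%%[![:space:]]*}"}"   # strip leading space, keep trailing (sn stays '12340 ')
            [[ -n $val ]] && eval "${ref}[\$reg]=\$val"   # mirrors functions.sh@22-23
        done < <("$@")
    }
    # usage: nvme_get_sketch nvme1 nvme id-ctrl /dev/nvme1; echo "${nvme1[sn]}"

The eval-per-line approach is what makes every field from id-ctrl and id-ns available later as a plain "${nvme1[reg]}" lookup.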
14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.520 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.520 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:23:18.520 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:23:18.520 14:42:26 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:23:18.520 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.520 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.520 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:23:18.520 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:23:18.520 14:42:26 -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:23:18.520 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.520 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.520 14:42:26 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:23:18.520 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:23:18.520 14:42:26 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:23:18.520 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.520 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.520 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:23:18.520 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:23:18.520 14:42:26 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:23:18.520 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.520 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.520 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:23:18.520 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:23:18.520 14:42:26 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:23:18.520 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.520 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.520 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:23:18.520 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:23:18.520 14:42:26 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:23:18.520 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.520 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.520 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.520 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:23:18.520 14:42:26 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:23:18.520 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.520 14:42:26 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.520 14:42:26 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:23:18.520 14:42:26 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:23:18.520 14:42:26 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:23:18.520 14:42:26 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.520 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.520 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.520 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:23:18.520 14:42:27 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:23:18.520 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.520 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.520 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:23:18.520 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:23:18.520 14:42:27 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:23:18.520 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.520 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.520 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.520 
14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:23:18.520 14:42:27 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:23:18.520 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.520 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.520 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.520 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:23:18.520 14:42:27 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:23:18.520 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.520 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.520 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:23:18.520 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:23:18.520 14:42:27 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:23:18.520 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.520 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.520 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:23:18.520 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:23:18.520 14:42:27 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:23:18.520 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.520 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.520 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.520 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:23:18.520 14:42:27 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:23:18.520 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.520 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.520 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:18.520 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:23:18.520 14:42:27 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:23:18.520 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.520 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.520 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:23:18.520 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:23:18.520 14:42:27 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:23:18.520 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.520 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.520 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.520 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:23:18.520 14:42:27 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:23:18.520 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.520 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.520 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.520 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:23:18.520 14:42:27 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:23:18.520 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.520 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.520 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.520 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:23:18.520 14:42:27 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:23:18.520 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.520 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.520 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.520 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:23:18.520 14:42:27 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:23:18.521 
14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.521 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.521 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.521 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.521 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.521 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.521 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.521 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.521 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.521 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.521 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.521 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:23:18.521 14:42:27 -- 
nvme/functions.sh@23 -- # nvme1[apsta]=0 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.521 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.521 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.521 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.521 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.521 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.521 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.521 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.521 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.521 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.521 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.521 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.521 14:42:27 -- 
nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.521 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.521 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.521 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.521 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.521 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.521 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.521 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.521 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.521 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.521 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.521 14:42:27 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.521 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.521 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:23:18.521 14:42:27 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:23:18.521 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.522 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.522 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.522 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.522 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.522 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.522 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.522 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.522 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 
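Two of the nvme1 values just captured are packed fields worth decoding: sqes=0x66 and cqes=0x44 split into a required entry size (bits 3:0) and a maximum entry size (bits 7:4), each stored as a power of two per the NVMe Identify Controller layout. A quick check against the trace values:

    # Decode the SQES/CQES nibbles recorded above (each nibble is 2^n bytes).
    sqes=$((0x66)); cqes=$((0x44))
    printf 'SQ entry: min %d, max %d bytes\n' $((2 ** (sqes & 0xF))) $((2 ** (sqes >> 4)))
    printf 'CQ entry: min %d, max %d bytes\n' $((2 ** (cqes & 0xF))) $((2 ** (cqes >> 4)))
    # -> 64/64 and 16/16, the standard fixed entry sizes this QEMU controller advertises

Likewise oncs=0x15d is a bitmask of the optional NVM commands the controller supports; the harness stores the raw hex and leaves interpretation to the individual tests.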
00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.522 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.522 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.522 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.522 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.522 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.522 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.522 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.522 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.522 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.522 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.522 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:23:18.522 
14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.522 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.522 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.522 14:42:27 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.522 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.522 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.522 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.522 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.522 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.522 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.522 14:42:27 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 
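The nvme1 controller table is nearly complete at this point (only the power-state sub-fields remain); the loop then descends into /sys/class/nvme/nvme1/nvme1n1 and runs id-ns, whose fields appear below. The size fields relate directly: nsze counts logical blocks, and the low nibble of flbas selects which lbaf descriptor is in use, whose lbads gives the block size as a power of two. Using the values captured just below (nsze=0x17a17a, flbas=0x7 selecting lbaf7 with lbads:12):

    # Namespace capacity from the id-ns fields below: blocks x 2^lbads bytes.
    nsze=$((0x17a17a)); lbads=12
    echo "nvme1n1: $((nsze * 2 ** lbads)) bytes"   # 6343335936, about 5.9 GiB

The "(in use)" suffix the trace records on lbaf7 is nvme-cli's own marker and agrees with that flbas value, just as lbaf4 was marked in use for nvme0n1 with flbas=0x4.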
00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.522 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.522 14:42:27 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:23:18.522 14:42:27 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.522 14:42:27 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:23:18.522 14:42:27 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:23:18.522 14:42:27 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:23:18.522 14:42:27 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:23:18.522 14:42:27 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:23:18.522 14:42:27 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:23:18.522 14:42:27 -- nvme/functions.sh@18 -- # shift 00:23:18.522 14:42:27 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.522 14:42:27 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:23:18.522 14:42:27 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.522 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.523 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:23:18.523 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:23:18.523 14:42:27 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:23:18.523 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.523 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.523 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:23:18.523 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:23:18.523 14:42:27 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:23:18.523 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.523 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.523 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:23:18.523 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:23:18.523 14:42:27 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:23:18.523 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.523 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.523 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:23:18.523 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:23:18.523 14:42:27 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:23:18.523 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.523 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.523 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:23:18.523 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:23:18.523 14:42:27 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:23:18.523 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.523 14:42:27 
-- nvme/functions.sh@21 -- # read -r reg val 00:23:18.523 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:18.523 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:23:18.523 14:42:27 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:23:18.523 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.523 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.523 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:18.523 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:23:18.523 14:42:27 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:23:18.523 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.523 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.523 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:23:18.523 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:23:18.523 14:42:27 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:23:18.523 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.523 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.523 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.523 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:23:18.523 14:42:27 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:23:18.523 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.523 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.523 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.523 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:23:18.523 14:42:27 -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:23:18.523 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.523 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.523 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.523 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:23:18.523 14:42:27 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:23:18.523 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.523 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.523 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.523 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:23:18.523 14:42:27 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:23:18.523 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.523 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.523 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:18.523 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:23:18.523 14:42:27 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:23:18.523 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.523 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.523 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.523 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:23:18.523 14:42:27 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:23:18.523 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.523 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.523 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.523 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:23:18.523 14:42:27 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:23:18.523 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.523 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.523 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.523 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:23:18.523 14:42:27 -- nvme/functions.sh@23 -- # 
nvme1n1[nacwu]=0
00:23:18.523 14:42:27 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 nvme1n1[nabo]=0 nvme1n1[nabspf]=0 nvme1n1[noiob]=0 nvme1n1[nvmcap]=0
00:23:18.523 14:42:27 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 nvme1n1[npwa]=0 nvme1n1[npdg]=0 nvme1n1[npda]=0 nvme1n1[nows]=0
00:23:18.523 14:42:27 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 nvme1n1[mcl]=128 nvme1n1[msrc]=127 nvme1n1[nulbaf]=0 nvme1n1[anagrpid]=0 nvme1n1[nsattr]=0 nvme1n1[nvmsetid]=0 nvme1n1[endgid]=0
00:23:18.524 14:42:27 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 nvme1n1[eui64]=0000000000000000
00:23:18.524 14:42:27 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:23:18.524 14:42:27 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:23:18.524 14:42:27 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:23:18.524 14:42:27 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:23:18.524 14:42:27 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:23:18.524 14:42:27 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:23:18.524 14:42:27 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:23:18.524 14:42:27 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:23:18.524 14:42:27 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
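The register dump above is nvme_get at work: for every `reg : val` line that nvme-cli prints, it runs the `IFS=:` / `read -r reg val` / `eval` triple visible in the trace and lands the pair in a global associative array named after the device. A minimal sketch of that loop, assuming nvme-cli's usual `field : value` output format; parse_id_output is an illustrative stand-in, not the real helper's name:

    # Sketch of the parse loop traced above (assumption: one "reg : val"
    # pair per line of nvme-cli output, as id-ctrl/id-ns print it).
    parse_id_output() {
        local ref=$1 cmd=$2 dev=$3 reg val
        declare -gA "$ref=()"                    # e.g. nvme1n1=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue            # skip blank/banner lines
            reg=${reg//[[:space:]]/}             # drop padding before the ':'
            val=${val# }                         # drop the space after it
            eval "${ref}[\$reg]=\$val"           # -> nvme1n1[mssrl]=128 ...
        done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
    }
    parse_id_output nvme1n1 id-ns /dev/nvme1n1   # then query ${nvme1n1[nsze]} etc.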
00:23:18.524 14:42:27 -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:23:18.524 14:42:27 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:23:18.524 14:42:27 -- scripts/common.sh@15 -- # local i
00:23:18.524 14:42:27 -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]]
00:23:18.524 14:42:27 -- scripts/common.sh@22 -- # [[ -z '' ]]
00:23:18.524 14:42:27 -- scripts/common.sh@24 -- # return 0
00:23:18.524 14:42:27 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:23:18.524 14:42:27 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:23:18.524 14:42:27 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:23:18.524 14:42:27 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 nvme2[ssvid]=0x1af4 nvme2[sn]='12342 ' nvme2[mn]='QEMU NVMe Ctrl ' nvme2[fr]='8.0.0 ' nvme2[rab]=6 nvme2[ieee]=525400
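The pci_can_use check above passes because no PCI filters are set in this run, so every discovered controller gets enumerated. A rough bash sketch of that gate, assuming PCI_ALLOWED/PCI_BLOCKED environment-variable semantics; the real logic lives in scripts/common.sh and may differ in detail:

    # Sketch only: approximates the allow/block filtering seen in the trace.
    pci_can_use() {
        local pci=$1
        # a non-empty allow-list is exclusive: devices outside it are skipped
        if [[ -n ${PCI_ALLOWED:-} && ! $PCI_ALLOWED =~ $pci ]]; then
            return 1
        fi
        [[ -z ${PCI_BLOCKED:-} ]] && return 0    # nothing blocked -> usable
        [[ $PCI_BLOCKED =~ $pci ]] && return 1   # explicitly blocked
        return 0
    }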
00:23:18.524 14:42:27 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 nvme2[mdts]=7 nvme2[cntlid]=0 nvme2[ver]=0x10400 nvme2[rtd3r]=0 nvme2[rtd3e]=0 nvme2[oaes]=0x100 nvme2[ctratt]=0x8000 nvme2[rrls]=0 nvme2[cntrltype]=1
00:23:18.524 14:42:27 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 nvme2[crdt1]=0 nvme2[crdt2]=0 nvme2[crdt3]=0 nvme2[nvmsr]=0 nvme2[vwci]=0 nvme2[mec]=0
00:23:18.525 14:42:27 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a nvme2[acl]=3 nvme2[aerl]=3 nvme2[frmw]=0x3 nvme2[lpa]=0x7 nvme2[elpe]=0 nvme2[npss]=0 nvme2[avscc]=0 nvme2[apsta]=0
00:23:18.525 14:42:27 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 nvme2[cctemp]=373 nvme2[mtfa]=0 nvme2[hmpre]=0 nvme2[hmmin]=0 nvme2[tnvmcap]=0 nvme2[unvmcap]=0 nvme2[rpmbs]=0
00:23:18.833 14:42:27 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 nvme2[dsto]=0 nvme2[fwug]=0 nvme2[kas]=0 nvme2[hctma]=0 nvme2[mntmt]=0 nvme2[mxtmt]=0 nvme2[sanicap]=0 nvme2[hmminds]=0 nvme2[hmmaxd]=0
00:23:18.833 14:42:27 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 nvme2[endgidmax]=0 nvme2[anatt]=0 nvme2[anacap]=0 nvme2[anagrpmax]=0 nvme2[nanagrpid]=0 nvme2[pels]=0 nvme2[domainid]=0 nvme2[megcap]=0
00:23:18.833 14:42:27 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 nvme2[cqes]=0x44 nvme2[maxcmd]=0 nvme2[nn]=256 nvme2[oncs]=0x15d nvme2[fuses]=0 nvme2[fna]=0 nvme2[vwc]=0x7 nvme2[awun]=0 nvme2[awupf]=0 nvme2[icsvscc]=0
00:23:18.834 14:42:27 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 nvme2[acwu]=0 nvme2[ocfs]=0x3 nvme2[sgls]=0x1 nvme2[mnan]=0 nvme2[maxdna]=0 nvme2[maxcna]=0 nvme2[subnqn]=nqn.2019-08.org.qemu:12342 nvme2[ioccsz]=0 nvme2[iorcsz]=0 nvme2[icdoff]=0
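Several of the id-ctrl fields just captured are log2-encoded rather than literal byte counts: mdts=7 caps a transfer at 2^7 host pages, and sqes=0x66 / cqes=0x44 pack the minimum (low nibble) and maximum (high nibble) queue-entry sizes as powers of two. A quick decode, assuming the common 4 KiB CAP.MPSMIN page size; decode_nvme2 is just an illustrative name:

    # Values copied from the nvme2 trace above; the 4 KiB page is an assumption.
    decode_nvme2() {
        local mdts=7 sqes=0x66 cqes=0x44 page=4096
        echo "max transfer:  $(( page << mdts )) bytes"                               # 524288 (512 KiB)
        echo "SQ entry size: $(( 1 << (sqes & 0xf) ))..$(( 1 << (sqes >> 4) )) bytes" # 64..64
        echo "CQ entry size: $(( 1 << (cqes & 0xf) ))..$(( 1 << (cqes >> 4) )) bytes" # 16..16
    }
    decode_nvme2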
00:23:18.834 14:42:27 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 nvme2[msdbd]=0 nvme2[ofcs]=0
00:23:18.834 14:42:27 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:23:18.834 14:42:27 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-'
00:23:18.834 14:42:27 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=-
00:23:18.834 14:42:27 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:23:18.834 14:42:27 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:23:18.834 14:42:27 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:23:18.834 14:42:27 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:23:18.834 14:42:27 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:23:18.834 14:42:27 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:23:18.834 14:42:27 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 nvme2n1[ncap]=0x100000 nvme2n1[nuse]=0x100000 nvme2n1[nsfeat]=0x14 nvme2n1[nlbaf]=7 nvme2n1[flbas]=0x4 nvme2n1[mc]=0x3 nvme2n1[dpc]=0x1f nvme2n1[dps]=0
00:23:18.834 14:42:27 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 nvme2n1[rescap]=0 nvme2n1[fpi]=0 nvme2n1[dlfeat]=1 nvme2n1[nawun]=0 nvme2n1[nawupf]=0 nvme2n1[nacwu]=0 nvme2n1[nabsn]=0 nvme2n1[nabo]=0 nvme2n1[nabspf]=0 nvme2n1[noiob]=0 nvme2n1[nvmcap]=0
00:23:18.835 14:42:27 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 nvme2n1[npwa]=0 nvme2n1[npdg]=0 nvme2n1[npda]=0 nvme2n1[nows]=0 nvme2n1[mssrl]=128 nvme2n1[mcl]=128 nvme2n1[msrc]=127 nvme2n1[nulbaf]=0 nvme2n1[anagrpid]=0 nvme2n1[nsattr]=0 nvme2n1[nvmsetid]=0 nvme2n1[endgid]=0
00:23:18.835 14:42:27 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 nvme2n1[eui64]=0000000000000000
00:23:18.835 14:42:27 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:23:18.835 14:42:27 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 '
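flbas=0x4 points at LBA format 4, the entry flagged '(in use)' above: lbads:12 gives 2^12 = 4096-byte blocks and ms:0 means no per-block metadata, so nsze=0x100000 blocks amount to 4 GiB. The same arithmetic in bash, with values copied from the nvme2n1 trace:

    flbas=0x4 nsze=0x100000 lbads=12   # from the nvme2n1 registers above
    fmt=$(( flbas & 0xf ))             # low nibble indexes lbaf0..lbaf15
    bs=$(( 1 << lbads ))               # lbads is log2(data block size)
    echo "nvme2n1: lbaf${fmt}, ${bs}-byte blocks, $(( nsze * bs )) bytes total"
    # prints: nvme2n1: lbaf4, 4096-byte blocks, 4294967296 bytes total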
00:23:18.835 14:42:27 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:23:18.835 14:42:27 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:23:18.835 14:42:27 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:23:18.835 14:42:27 -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:23:18.835 14:42:27 -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:23:18.836 14:42:27 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:23:18.836 14:42:27 -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 nvme2n2[ncap]=0x100000 nvme2n2[nuse]=0x100000 nvme2n2[nsfeat]=0x14 nvme2n2[nlbaf]=7 nvme2n2[flbas]=0x4 nvme2n2[mc]=0x3 nvme2n2[dpc]=0x1f nvme2n2[dps]=0
00:23:18.836 14:42:27 -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 nvme2n2[rescap]=0 nvme2n2[fpi]=0 nvme2n2[dlfeat]=1 nvme2n2[nawun]=0 nvme2n2[nawupf]=0 nvme2n2[nacwu]=0 nvme2n2[nabsn]=0 nvme2n2[nabo]=0 nvme2n2[nabspf]=0 nvme2n2[noiob]=0 nvme2n2[nvmcap]=0
00:23:18.836 14:42:27 -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 nvme2n2[npwa]=0 nvme2n2[npdg]=0 nvme2n2[npda]=0 nvme2n2[nows]=0 nvme2n2[mssrl]=128 nvme2n2[mcl]=128 nvme2n2[msrc]=127 nvme2n2[nulbaf]=0 nvme2n2[anagrpid]=0 nvme2n2[nsattr]=0 nvme2n2[nvmsetid]=0 nvme2n2[endgid]=0
00:23:18.837 14:42:27 -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 nvme2n2[eui64]=0000000000000000
00:23:18.837 14:42:27 -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:23:18.837 14:42:27 -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:23:18.837 14:42:27 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
00:23:18.837 14:42:27 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:23:18.837 14:42:27 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:23:18.837 14:42:27 -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:23:18.837 14:42:27 -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:23:18.837 14:42:27 -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val
00:23:18.837 14:42:27 -- nvme/functions.sh@18 -- # shift
00:23:18.837 14:42:27 -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()'
00:23:18.837 14:42:27 -- nvme/functions.sh@21
-- # IFS=: 00:23:18.837 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.837 14:42:27 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:23:18.837 14:42:27 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:23:18.837 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.837 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.837 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:23:18.837 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:23:18.837 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:23:18.837 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.837 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.837 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:23:18.837 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:23:18.837 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:23:18.837 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.837 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.837 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:23:18.837 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:23:18.837 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:23:18.837 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.837 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.837 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:23:18.837 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:23:18.837 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:23:18.837 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.837 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.837 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:23:18.837 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:23:18.837 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:23:18.837 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.837 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.837 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:23:18.837 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:23:18.837 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:23:18.837 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.837 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.837 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:18.837 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:23:18.837 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:23:18.837 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.837 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.837 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:23:18.837 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:23:18.837 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:23:18.837 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.837 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.837 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.837 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:23:18.837 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:23:18.837 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.837 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.837 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.837 14:42:27 -- nvme/functions.sh@23 -- # eval 
'nvme2n3[nmic]="0"' 00:23:18.837 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:23:18.837 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.837 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.837 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.837 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:23:18.837 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:23:18.837 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.837 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.837 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.837 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:23:18.837 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:23:18.837 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.837 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.837 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:18.837 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:23:18.837 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:23:18.837 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.837 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.837 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.838 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.838 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.838 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.838 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.838 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.838 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.838 14:42:27 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.838 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.838 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.838 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.838 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.838 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.838 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.838 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.838 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.838 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.838 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # 
IFS=: 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.838 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.838 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.838 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.838 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.838 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.838 14:42:27 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.838 14:42:27 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.838 14:42:27 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.838 14:42:27 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.838 14:42:27 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # 
eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.838 14:42:27 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.838 14:42:27 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.838 14:42:27 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:23:18.838 14:42:27 -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.838 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.838 14:42:27 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:23:18.838 14:42:27 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:23:18.838 14:42:27 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:23:18.838 14:42:27 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:23:18.838 14:42:27 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:23:18.838 14:42:27 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:23:18.838 14:42:27 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:23:18.838 14:42:27 -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:23:18.838 14:42:27 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:23:18.838 14:42:27 -- scripts/common.sh@15 -- # local i 00:23:18.838 14:42:27 -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:23:18.838 14:42:27 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:23:18.839 14:42:27 -- scripts/common.sh@24 -- # return 0 00:23:18.839 14:42:27 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:23:18.839 14:42:27 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:23:18.839 14:42:27 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:23:18.839 14:42:27 -- nvme/functions.sh@18 -- # shift 00:23:18.839 14:42:27 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.839 14:42:27 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:23:18.839 14:42:27 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.839 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.839 14:42:27 -- 
nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.839 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.839 14:42:27 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.839 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.839 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.839 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.839 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.839 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.839 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.839 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.839 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.839 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.839 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.839 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.839 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.839 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.839 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.839 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.839 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.839 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.839 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.839 
14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.839 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.839 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.839 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.839 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.839 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.839 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.839 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.839 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.839 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:23:18.839 14:42:27 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:23:18.839 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.840 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.840 14:42:27 -- 
nvme/functions.sh@21 -- # read -r reg val 00:23:18.840 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.840 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.840 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.840 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.840 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.840 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.840 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.840 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.840 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.840 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.840 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:23:18.840 14:42:27 
-- nvme/functions.sh@21 -- # IFS=: 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.840 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.840 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.840 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.840 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.840 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.840 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.840 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.840 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.840 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.840 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.840 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:23:18.840 14:42:27 
-- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.840 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.840 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.840 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.840 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.840 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.840 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.840 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.840 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.840 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.840 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.840 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.840 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:23:18.840 14:42:27 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.840 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.841 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.841 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.841 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.841 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.841 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.841 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.841 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.841 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.841 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.841 
14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.841 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.841 14:42:27 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.841 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.841 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.841 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.841 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.841 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.841 14:42:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.841 14:42:27 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # IFS=: 00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val 00:23:18.841 14:42:27 -- nvme/functions.sh@22 -- 
# [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-'
00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # IFS=:
00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val
00:23:18.841 14:42:27 -- nvme/functions.sh@22 -- # [[ -n - ]]
00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"'
00:23:18.841 14:42:27 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=-
00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # IFS=:
00:23:18.841 14:42:27 -- nvme/functions.sh@21 -- # read -r reg val
00:23:18.841 14:42:27 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns
00:23:18.841 14:42:27 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3
00:23:18.841 14:42:27 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns
00:23:18.841 14:42:27 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0
00:23:18.841 14:42:27 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3
00:23:18.841 14:42:27 -- nvme/functions.sh@65 -- # (( 4 > 0 ))
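
Up to here the trace is nvme/functions.sh caching all four controllers (nvme0-nvme3) and their namespaces: each `reg : val` line of nvme-cli's `id-ctrl`/`id-ns` output is read with `IFS=:` and stored in a bash associative array. Stripped of the dynamic array naming and the eval, the pattern reduces to roughly this sketch (illustrative only; assumes nvme-cli's plain-text output, and `ctrl_regs` is a made-up name):

    declare -A ctrl_regs
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}     # field names arrive right-padded, e.g. 'ctratt    '
        [[ -n $reg && -n $val ]] || continue
        ctrl_regs[$reg]=${val# }     # keep the raw value, minus the separator space
    done < <(nvme id-ctrl /dev/nvme3)
    echo "ctratt=${ctrl_regs[ctratt]}"   # -> ctratt=0x88010, as cached above

The real helper instead evals into a caller-named array (`local -gA nvme3=()`), which is why every field appears twice in the trace: once as the eval string and once as the resulting assignment.
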
00:23:18.841 14:42:27 -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp
00:23:18.841 14:42:27 -- nvme/functions.sh@202 -- # local _ctrls feature=fdp
00:23:18.841 14:42:27 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:23:18.841 14:42:27 -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp
00:23:18.841 14:42:27 -- nvme/functions.sh@190 -- # (( 4 == 0 ))
00:23:18.841 14:42:27 -- nvme/functions.sh@192 -- # local ctrl feature=fdp
00:23:18.841 14:42:27 -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp
00:23:18.841 14:42:27 -- nvme/functions.sh@194 -- # [[ function == function ]]
00:23:18.841 14:42:27 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}"
00:23:18.841 14:42:27 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1
00:23:18.841 14:42:27 -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt
00:23:18.841 14:42:27 -- nvme/functions.sh@176 -- # get_ctratt nvme1
00:23:18.841 14:42:27 -- nvme/functions.sh@164 -- # local ctrl=nvme1
00:23:18.841 14:42:27 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt
00:23:18.841 14:42:27 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt
00:23:18.841 14:42:27 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]]
00:23:18.841 14:42:27 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1
00:23:18.841 14:42:27 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]]
00:23:18.841 14:42:27 -- nvme/functions.sh@76 -- # echo 0x8000
00:23:18.841 14:42:27 -- nvme/functions.sh@176 -- # ctratt=0x8000
00:23:18.841 14:42:27 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 ))
00:23:18.841 14:42:27 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}"
00:23:18.841 14:42:27 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0
00:23:18.841 14:42:27 -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt
00:23:18.841 14:42:27 -- nvme/functions.sh@176 -- # get_ctratt nvme0
00:23:18.841 14:42:27 -- nvme/functions.sh@164 -- # local ctrl=nvme0
00:23:18.841 14:42:27 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt
00:23:18.841 14:42:27 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt
00:23:18.841 14:42:27 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:23:18.841 14:42:27 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:23:18.841 14:42:27 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]]
00:23:18.841 14:42:27 -- nvme/functions.sh@76 -- # echo 0x8000
00:23:18.841 14:42:27 -- nvme/functions.sh@176 -- # ctratt=0x8000
00:23:18.841 14:42:27 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 ))
00:23:18.841 14:42:27 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}"
00:23:18.841 14:42:27 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3
00:23:18.841 14:42:27 -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt
00:23:18.841 14:42:27 -- nvme/functions.sh@176 -- # get_ctratt nvme3
00:23:18.841 14:42:27 -- nvme/functions.sh@164 -- # local ctrl=nvme3
00:23:18.841 14:42:27 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt
00:23:18.841 14:42:27 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt
00:23:18.841 14:42:27 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:23:18.841 14:42:27 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:23:18.841 14:42:27 -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]]
00:23:18.841 14:42:27 -- nvme/functions.sh@76 -- # echo 0x88010
00:23:18.841 14:42:27 -- nvme/functions.sh@176 -- # ctratt=0x88010
00:23:18.841 14:42:27 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 ))
00:23:18.841 14:42:27 -- nvme/functions.sh@197 -- # echo nvme3
00:23:18.841 14:42:27 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}"
00:23:18.841 14:42:27 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2
00:23:18.841 14:42:27 -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt
00:23:18.841 14:42:27 -- nvme/functions.sh@176 -- # get_ctratt nvme2
00:23:18.841 14:42:27 -- nvme/functions.sh@164 -- # local ctrl=nvme2
00:23:18.841 14:42:27 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt
00:23:18.841 14:42:27 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt
00:23:18.842 14:42:27 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:23:18.842 14:42:27 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:23:18.842 14:42:27 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]]
00:23:18.842 14:42:27 -- nvme/functions.sh@76 -- # echo 0x8000
00:23:18.842 14:42:27 -- nvme/functions.sh@176 -- # ctratt=0x8000
00:23:18.842 14:42:27 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 ))
00:23:18.842 14:42:27 -- nvme/functions.sh@204 -- # trap - ERR
00:23:18.842 14:42:27 -- nvme/functions.sh@204 -- # print_backtrace
00:23:18.842 14:42:27 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]]
00:23:18.842 14:42:27 -- common/autotest_common.sh@1139 -- # return 0
00:23:18.842 14:42:27 -- nvme/functions.sh@204 -- # trap - ERR
00:23:18.842 14:42:27 -- nvme/functions.sh@204 -- # print_backtrace
00:23:18.842 14:42:27 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]]
00:23:18.842 14:42:27 -- common/autotest_common.sh@1139 -- # return 0
00:23:18.842 14:42:27 -- nvme/functions.sh@205 -- # (( 1 > 0 ))
00:23:18.842 14:42:27 -- nvme/functions.sh@206 -- # echo nvme3
00:23:18.842 14:42:27 -- nvme/functions.sh@207 -- # return 0
00:23:18.842 14:42:27 -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3
00:23:18.842 14:42:27 -- nvme/nvme_fdp.sh@13 -- # bdf=0000:00:13.0
00:23:18.842 14:42:27 -- nvme/nvme_fdp.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:23:19.410 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:23:19.977 lsblk: /dev/nvme3c3n1: not a block device
00:23:20.236 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:23:20.236 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:23:20.236 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:23:20.495 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
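
The controller selection just traced comes down to one bit: ctrl_has_fdp pulls each controller's cached CTRATT word and tests bit 19, the Flexible Data Placement capability (the expression echoed at nvme/functions.sh@178). nvme0, nvme1 and nvme2 report 0x8000, so the test fails; nvme3, QEMU's fdp-subsys3 device, reports 0x88010 and is the only name echoed back to nvme_fdp.sh. The same arithmetic, checked standalone:

    for ctratt in 0x8000 0x88010; do
        # 1 << 19 == 0x80000, the FDP bit in the Identify Controller CTRATT field
        if (( ctratt & 1 << 19 )); then
            printf '%#x: FDP capable\n' "$ctratt"
        else
            printf '%#x: no FDP\n' "$ctratt"
        fi
    done
    # prints: 0x8000: no FDP
    #         0x88010: FDP capable

With nvme3 selected and its BDF (0000:00:13.0) recorded, setup.sh rebinds the devices to uio_pci_generic so the SPDK fdp test binary below can attach.
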
run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:23:20.495 14:42:28 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:23:20.495 14:42:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:20.495 14:42:28 -- common/autotest_common.sh@10 -- # set +x 00:23:20.495 ************************************ 00:23:20.495 START TEST nvme_flexible_data_placement 00:23:20.495 ************************************ 00:23:20.495 14:42:29 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:23:20.753 Initializing NVMe Controllers 00:23:20.753 Attaching to 0000:00:13.0 00:23:20.753 Controller supports FDP Attached to 0000:00:13.0 00:23:20.753 Namespace ID: 1 Endurance Group ID: 1 00:23:20.753 Initialization complete. 00:23:20.753 00:23:20.753 ================================== 00:23:20.753 == FDP tests for Namespace: #01 == 00:23:20.753 ================================== 00:23:20.753 00:23:20.753 Get Feature: FDP: 00:23:20.753 ================= 00:23:20.754 Enabled: Yes 00:23:20.754 FDP configuration Index: 0 00:23:20.754 00:23:20.754 FDP configurations log page 00:23:20.754 =========================== 00:23:20.754 Number of FDP configurations: 1 00:23:20.754 Version: 0 00:23:20.754 Size: 112 00:23:20.754 FDP Configuration Descriptor: 0 00:23:20.754 Descriptor Size: 96 00:23:20.754 Reclaim Group Identifier format: 2 00:23:20.754 FDP Volatile Write Cache: Not Present 00:23:20.754 FDP Configuration: Valid 00:23:20.754 Vendor Specific Size: 0 00:23:20.754 Number of Reclaim Groups: 2 00:23:20.754 Number of Reclaim Unit Handles: 8 00:23:20.754 Max Placement Identifiers: 128 00:23:20.754 Number of Namespaces Supported: 256 00:23:20.754 Reclaim Unit Nominal Size: 6000000 bytes 00:23:20.754 Estimated Reclaim Unit Time Limit: Not Reported 00:23:20.754 RUH Desc #000: RUH Type: Initially Isolated 00:23:20.754 RUH Desc #001: RUH Type: Initially Isolated 00:23:20.754 RUH Desc #002: RUH Type: Initially Isolated 00:23:20.754 RUH Desc #003: RUH Type: Initially Isolated 00:23:20.754 RUH Desc #004: RUH Type: Initially Isolated 00:23:20.754 RUH Desc #005: RUH Type: Initially Isolated 00:23:20.754 RUH Desc #006: RUH Type: Initially Isolated 00:23:20.754 RUH Desc #007: RUH Type: Initially Isolated 00:23:20.754 00:23:20.754 FDP reclaim unit handle usage log page 00:23:20.754 ====================================== 00:23:20.754 Number of Reclaim Unit Handles: 8 00:23:20.754 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:23:20.754 RUH Usage Desc #001: RUH Attributes: Unused 00:23:20.754 RUH Usage Desc #002: RUH Attributes: Unused 00:23:20.754 RUH Usage Desc #003: RUH Attributes: Unused 00:23:20.754 RUH Usage Desc #004: RUH Attributes: Unused 00:23:20.754 RUH Usage Desc #005: RUH Attributes: Unused 00:23:20.754 RUH Usage Desc #006: RUH Attributes: Unused 00:23:20.754 RUH Usage Desc #007: RUH Attributes: Unused 00:23:20.754 00:23:20.754 FDP statistics log page 00:23:20.754 ======================= 00:23:20.754 Host bytes with metadata written: 801198080 00:23:20.754 Media bytes with metadata written: 801357824 00:23:20.754 Media bytes erased: 0 00:23:20.754 00:23:20.754 FDP Reclaim unit handle status 00:23:20.754 ============================== 00:23:20.754 Number of RUHS descriptors: 2 00:23:20.754 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000003eb 00:23:20.754 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 
0x0000000000006000 00:23:20.754 00:23:20.754 FDP write on placement id: 0 success 00:23:20.754 00:23:20.754 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:23:20.754 00:23:20.754 IO mgmt send: RUH update for Placement ID: #0 Success 00:23:20.754 00:23:20.754 Get Feature: FDP Events for Placement handle: #0 00:23:20.754 ======================== 00:23:20.754 Number of FDP Events: 6 00:23:20.754 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:23:20.754 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:23:20.754 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:23:20.754 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:23:20.754 FDP Event: #4 Type: Media Reallocated Enabled: No 00:23:20.754 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:23:20.754 00:23:20.754 FDP events log page 00:23:20.754 =================== 00:23:20.754 Number of FDP events: 1 00:23:20.754 FDP Event #0: 00:23:20.754 Event Type: RU Not Written to Capacity 00:23:20.754 Placement Identifier: Valid 00:23:20.754 NSID: Valid 00:23:20.754 Location: Valid 00:23:20.754 Placement Identifier: 0 00:23:20.754 Event Timestamp: 12 00:23:20.754 Namespace Identifier: 1 00:23:20.754 Reclaim Group Identifier: 0 00:23:20.754 Reclaim Unit Handle Identifier: 0 00:23:20.754 00:23:20.754 FDP test passed 00:23:21.013 00:23:21.013 real 0m0.342s 00:23:21.013 user 0m0.123s 00:23:21.013 sys 0m0.116s 00:23:21.013 14:42:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:21.013 14:42:29 -- common/autotest_common.sh@10 -- # set +x 00:23:21.013 ************************************ 00:23:21.013 END TEST nvme_flexible_data_placement 00:23:21.013 ************************************ 00:23:21.013 00:23:21.013 real 0m8.771s 00:23:21.013 user 0m1.395s 00:23:21.013 sys 0m2.318s 00:23:21.013 14:42:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:21.013 14:42:29 -- common/autotest_common.sh@10 -- # set +x 00:23:21.013 ************************************ 00:23:21.013 END TEST nvme_fdp 00:23:21.013 ************************************ 00:23:21.013 14:42:29 -- spdk/autotest.sh@231 -- # [[ '' -eq 1 ]] 00:23:21.013 14:42:29 -- spdk/autotest.sh@235 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:23:21.013 14:42:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:21.013 14:42:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:21.013 14:42:29 -- common/autotest_common.sh@10 -- # set +x 00:23:21.013 ************************************ 00:23:21.013 START TEST nvme_rpc 00:23:21.013 ************************************ 00:23:21.013 14:42:29 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:23:21.272 * Looking for test storage... 
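The controller selection that singled out nvme3 earlier in this test comes down to a single bit: the harness caches each controller's CTRATT identify field and tests bit 19, the NVMe Flexible Data Placement capability flag. A minimal standalone sketch of that check, reconstructed from the nvme/functions.sh trace above (running it outside the suite, with the literal CTRATT values from this log, is an assumption):

ctrl_has_fdp() {
    local ctratt=$1
    # CTRATT bit 19 advertises Flexible Data Placement support (NVMe 2.0).
    (( ctratt & 1 << 19 ))
}
ctrl_has_fdp 0x8000  && echo FDP   # nvme0/1/2: prints nothing, bit 19 is clear
ctrl_has_fdp 0x88010 && echo FDP   # nvme3: prints FDP, 0x88010 includes 0x80000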
00:23:21.272 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:23:21.272 14:42:29 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:21.272 14:42:29 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:23:21.272 14:42:29 -- common/autotest_common.sh@1510 -- # bdfs=() 00:23:21.272 14:42:29 -- common/autotest_common.sh@1510 -- # local bdfs 00:23:21.272 14:42:29 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:23:21.272 14:42:29 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:23:21.272 14:42:29 -- common/autotest_common.sh@1499 -- # bdfs=() 00:23:21.272 14:42:29 -- common/autotest_common.sh@1499 -- # local bdfs 00:23:21.272 14:42:29 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:23:21.272 14:42:29 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:21.272 14:42:29 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:23:21.272 14:42:29 -- common/autotest_common.sh@1501 -- # (( 4 == 0 )) 00:23:21.272 14:42:29 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:23:21.272 14:42:29 -- common/autotest_common.sh@1513 -- # echo 0000:00:10.0 00:23:21.272 14:42:29 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:23:21.272 14:42:29 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=72973 00:23:21.272 14:42:29 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:23:21.272 14:42:29 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:23:21.272 14:42:29 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 72973 00:23:21.272 14:42:29 -- common/autotest_common.sh@817 -- # '[' -z 72973 ']' 00:23:21.272 14:42:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.272 14:42:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:21.272 14:42:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:21.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:21.272 14:42:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:21.272 14:42:29 -- common/autotest_common.sh@10 -- # set +x 00:23:21.272 [2024-04-17 14:42:29.834148] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
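The get_first_nvme_bdf helper traced above is a short pipeline: render the attached NVMe devices as SPDK JSON config with gen_nvme.sh, then pull each traddr out with jq. A hedged sketch of the same lookup (assumes $rootdir points at an SPDK checkout, as it does in the trace):

bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo 'no NVMe devices found' >&2; exit 1; }
printf '%s\n' "${bdfs[@]}"   # 0000:00:10.0 through 0000:00:13.0 in this run
bdf=${bdfs[0]}               # handed to bdev_nvme_attach_controller next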
00:23:21.272 [2024-04-17 14:42:29.834519] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72973 ] 00:23:21.531 [2024-04-17 14:42:30.004951] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:21.790 [2024-04-17 14:42:30.277518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.790 [2024-04-17 14:42:30.277573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:23.167 14:42:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:23.167 14:42:31 -- common/autotest_common.sh@850 -- # return 0 00:23:23.167 14:42:31 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:23:23.167 Nvme0n1 00:23:23.167 14:42:31 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:23:23.167 14:42:31 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:23:23.426 request: 00:23:23.426 { 00:23:23.426 "filename": "non_existing_file", 00:23:23.426 "bdev_name": "Nvme0n1", 00:23:23.426 "method": "bdev_nvme_apply_firmware", 00:23:23.426 "req_id": 1 00:23:23.426 } 00:23:23.426 Got JSON-RPC error response 00:23:23.426 response: 00:23:23.426 { 00:23:23.426 "code": -32603, 00:23:23.426 "message": "open file failed." 00:23:23.426 } 00:23:23.426 14:42:31 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:23:23.426 14:42:31 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:23:23.427 14:42:31 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:23:23.685 14:42:32 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:23:23.685 14:42:32 -- nvme/nvme_rpc.sh@40 -- # killprocess 72973 00:23:23.685 14:42:32 -- common/autotest_common.sh@936 -- # '[' -z 72973 ']' 00:23:23.685 14:42:32 -- common/autotest_common.sh@940 -- # kill -0 72973 00:23:23.685 14:42:32 -- common/autotest_common.sh@941 -- # uname 00:23:23.685 14:42:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:23.685 14:42:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72973 00:23:23.685 14:42:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:23.685 14:42:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:23.685 14:42:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72973' 00:23:23.685 killing process with pid 72973 00:23:23.685 14:42:32 -- common/autotest_common.sh@955 -- # kill 72973 00:23:23.685 14:42:32 -- common/autotest_common.sh@960 -- # wait 72973 00:23:26.967 00:23:26.967 real 0m5.346s 00:23:26.967 user 0m9.973s 00:23:26.967 sys 0m0.712s 00:23:26.967 14:42:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:26.967 ************************************ 00:23:26.967 END TEST nvme_rpc 00:23:26.967 ************************************ 00:23:26.967 14:42:34 -- common/autotest_common.sh@10 -- # set +x 00:23:26.967 14:42:34 -- spdk/autotest.sh@236 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:23:26.967 14:42:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:26.967 14:42:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:26.967 14:42:34 -- common/autotest_common.sh@10 -- # set +x 00:23:26.967 ************************************ 00:23:26.967 START TEST 
nvme_rpc_timeouts 00:23:26.967 ************************************ 00:23:26.967 14:42:35 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:23:26.967 * Looking for test storage... 00:23:26.967 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:23:26.967 14:42:35 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:26.967 14:42:35 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_73060 00:23:26.967 14:42:35 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_73060 00:23:26.967 14:42:35 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=73094 00:23:26.967 14:42:35 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:23:26.967 14:42:35 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:23:26.967 14:42:35 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 73094 00:23:26.967 14:42:35 -- common/autotest_common.sh@817 -- # '[' -z 73094 ']' 00:23:26.967 14:42:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.967 14:42:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:26.967 14:42:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:26.967 14:42:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:26.967 14:42:35 -- common/autotest_common.sh@10 -- # set +x 00:23:26.967 [2024-04-17 14:42:35.272844] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:23:26.967 [2024-04-17 14:42:35.272982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73094 ] 00:23:26.967 [2024-04-17 14:42:35.452145] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:27.225 [2024-04-17 14:42:35.757348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.225 [2024-04-17 14:42:35.757387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.598 14:42:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:28.598 14:42:36 -- common/autotest_common.sh@850 -- # return 0 00:23:28.598 Checking default timeout settings: 00:23:28.598 14:42:36 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:23:28.598 14:42:36 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:23:28.856 Making settings changes with rpc: 00:23:28.856 14:42:37 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:23:28.856 14:42:37 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:23:29.114 Check default vs. modified settings: 00:23:29.114 14:42:37 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:23:29.114 14:42:37 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:23:29.372 14:42:37 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:23:29.372 14:42:37 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:23:29.372 14:42:37 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:23:29.372 14:42:37 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:23:29.372 14:42:37 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_73060 00:23:29.372 14:42:37 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:23:29.372 14:42:37 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_73060 00:23:29.372 14:42:37 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:23:29.372 14:42:37 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:23:29.372 14:42:37 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:23:29.372 Setting action_on_timeout is changed as expected. 00:23:29.372 14:42:37 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:23:29.372 14:42:37 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:23:29.372 14:42:37 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:23:29.372 14:42:37 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_73060 00:23:29.372 14:42:37 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:23:29.372 14:42:37 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:23:29.372 14:42:37 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:23:29.372 14:42:37 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_73060 00:23:29.372 14:42:37 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:23:29.372 14:42:37 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:23:29.372 14:42:37 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:23:29.372 Setting timeout_us is changed as expected. 00:23:29.372 14:42:37 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:23:29.372 14:42:37 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:23:29.372 14:42:37 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:23:29.372 14:42:37 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_73060 00:23:29.372 14:42:37 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:23:29.372 14:42:37 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:23:29.372 14:42:37 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:23:29.372 14:42:37 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_73060 00:23:29.372 14:42:37 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:23:29.372 14:42:37 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:23:29.372 14:42:37 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:23:29.372 14:42:37 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:23:29.372 14:42:37 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:23:29.372 Setting timeout_admin_us is changed as expected. 
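All three checks above follow one recipe: dump the target's configuration with save_config before and after the bdev_nvme_set_options call, then compare a single field from each dump. A condensed sketch of that loop (temp file names taken from this run; equality is treated as a failure, exactly as the trace does):

for setting in action_on_timeout timeout_us timeout_admin_us; do
    before=$(grep "$setting" /tmp/settings_default_73060 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting" /tmp/settings_modified_73060 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    if [ "$before" == "$after" ]; then
        echo "Setting $setting was not changed" >&2
        exit 1
    fi
    echo "Setting $setting is changed as expected."
done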
00:23:29.372 14:42:37 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:23:29.372 14:42:37 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_73060 /tmp/settings_modified_73060 00:23:29.372 14:42:37 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 73094 00:23:29.372 14:42:37 -- common/autotest_common.sh@936 -- # '[' -z 73094 ']' 00:23:29.372 14:42:37 -- common/autotest_common.sh@940 -- # kill -0 73094 00:23:29.372 14:42:37 -- common/autotest_common.sh@941 -- # uname 00:23:29.372 14:42:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:29.372 14:42:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73094 00:23:29.372 14:42:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:29.372 killing process with pid 73094 00:23:29.372 14:42:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:29.372 14:42:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73094' 00:23:29.372 14:42:37 -- common/autotest_common.sh@955 -- # kill 73094 00:23:29.372 14:42:37 -- common/autotest_common.sh@960 -- # wait 73094 00:23:32.690 RPC TIMEOUT SETTING TEST PASSED. 00:23:32.690 14:42:40 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:23:32.690 00:23:32.690 real 0m5.670s 00:23:32.690 user 0m10.616s 00:23:32.690 sys 0m0.715s 00:23:32.690 14:42:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:32.690 14:42:40 -- common/autotest_common.sh@10 -- # set +x 00:23:32.690 ************************************ 00:23:32.690 END TEST nvme_rpc_timeouts 00:23:32.690 ************************************ 00:23:32.690 14:42:40 -- spdk/autotest.sh@240 -- # '[' 1 -eq 0 ']' 00:23:32.690 14:42:40 -- spdk/autotest.sh@244 -- # [[ 1 -eq 1 ]] 00:23:32.690 14:42:40 -- spdk/autotest.sh@245 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:23:32.690 14:42:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:32.690 14:42:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:32.690 14:42:40 -- common/autotest_common.sh@10 -- # set +x 00:23:32.690 ************************************ 00:23:32.690 START TEST nvme_xnvme 00:23:32.690 ************************************ 00:23:32.690 14:42:40 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:23:32.690 * Looking for test storage... 
00:23:32.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:23:32.690 14:42:40 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:32.690 14:42:40 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:32.690 14:42:40 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:32.690 14:42:40 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:32.690 14:42:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.690 14:42:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.691 14:42:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.691 14:42:40 -- paths/export.sh@5 -- # export PATH 00:23:32.691 14:42:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.691 14:42:40 -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:23:32.691 14:42:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:32.691 14:42:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:32.691 14:42:40 -- common/autotest_common.sh@10 -- # set +x 00:23:32.691 ************************************ 00:23:32.691 START TEST xnvme_to_malloc_dd_copy 00:23:32.691 ************************************ 00:23:32.691 14:42:41 -- common/autotest_common.sh@1111 -- # malloc_to_xnvme_copy 00:23:32.691 14:42:41 -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:23:32.691 14:42:41 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:23:32.691 14:42:41 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:23:32.691 14:42:41 -- dd/common.sh@191 -- # return 00:23:32.691 14:42:41 -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:23:32.691 14:42:41 -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:23:32.691 14:42:41 -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:23:32.691 14:42:41 -- xnvme/xnvme.sh@18 -- # local io 00:23:32.691 14:42:41 -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:23:32.691 14:42:41 -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:23:32.691 
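Everything in this copy test runs against null_blk, a RAM-backed kernel block device, so the throughput numbers measure the bdev I/O path rather than real media. The init_null_blk/remove_null_blk pair traced from dd/common.sh above reduces to roughly the following (the unload-before-load guard for repeat runs is an assumption):

init_null_blk() {
    # Reload cleanly if a previous run left the module in place (assumed cleanup path).
    [[ -e /sys/module/null_blk ]] && modprobe -r null_blk
    modprobe null_blk "$@"   # gb=1 exposes a 1 GiB /dev/nullb0
}
remove_null_blk() {
    modprobe -r null_blk
}
init_null_blk gb=1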
14:42:41 -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:23:32.691 14:42:41 -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:23:32.691 14:42:41 -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:23:32.691 14:42:41 -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:23:32.691 14:42:41 -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:23:32.691 14:42:41 -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:23:32.691 14:42:41 -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:23:32.691 14:42:41 -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:23:32.691 14:42:41 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:23:32.691 14:42:41 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:23:32.691 14:42:41 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:23:32.691 14:42:41 -- xnvme/xnvme.sh@42 -- # gen_conf 00:23:32.691 14:42:41 -- dd/common.sh@31 -- # xtrace_disable 00:23:32.691 14:42:41 -- common/autotest_common.sh@10 -- # set +x 00:23:32.691 { 00:23:32.691 "subsystems": [ 00:23:32.691 { 00:23:32.691 "subsystem": "bdev", 00:23:32.691 "config": [ 00:23:32.691 { 00:23:32.691 "params": { 00:23:32.691 "block_size": 512, 00:23:32.691 "num_blocks": 2097152, 00:23:32.691 "name": "malloc0" 00:23:32.691 }, 00:23:32.691 "method": "bdev_malloc_create" 00:23:32.691 }, 00:23:32.691 { 00:23:32.691 "params": { 00:23:32.691 "io_mechanism": "libaio", 00:23:32.691 "filename": "/dev/nullb0", 00:23:32.691 "name": "null0" 00:23:32.691 }, 00:23:32.691 "method": "bdev_xnvme_create" 00:23:32.691 }, 00:23:32.691 { 00:23:32.691 "method": "bdev_wait_for_examine" 00:23:32.691 } 00:23:32.691 ] 00:23:32.691 } 00:23:32.691 ] 00:23:32.691 } 00:23:32.691 [2024-04-17 14:42:41.159935] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
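The gen_conf output above reaches spdk_dd through process substitution, which is why the trace shows --json /dev/fd/62. A self-contained sketch of the same invocation, with the block counts and bdev names copied from this run:

gen_conf() {
cat <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
    "method": "bdev_malloc_create" },
  { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
    "method": "bdev_xnvme_create" },
  { "method": "bdev_wait_for_examine" } ] } ] }
EOF
}
build/bin/spdk_dd --ib=malloc0 --ob=null0 --json <(gen_conf)

The reverse pass that follows is identical apart from swapping --ib and --ob.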
00:23:32.691 [2024-04-17 14:42:41.160093] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73249 ] 00:23:32.949 [2024-04-17 14:42:41.331415] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.208 [2024-04-17 14:42:41.597807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.298  Copying: 218/1024 [MB] (218 MBps) Copying: 433/1024 [MB] (215 MBps) Copying: 649/1024 [MB] (216 MBps) Copying: 875/1024 [MB] (225 MBps) Copying: 1024/1024 [MB] (average 219 MBps) 00:23:45.298 00:23:45.298 14:42:52 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:23:45.298 14:42:52 -- xnvme/xnvme.sh@47 -- # gen_conf 00:23:45.298 14:42:52 -- dd/common.sh@31 -- # xtrace_disable 00:23:45.298 14:42:52 -- common/autotest_common.sh@10 -- # set +x 00:23:45.298 { 00:23:45.298 "subsystems": [ 00:23:45.298 { 00:23:45.298 "subsystem": "bdev", 00:23:45.298 "config": [ 00:23:45.298 { 00:23:45.298 "params": { 00:23:45.298 "block_size": 512, 00:23:45.298 "num_blocks": 2097152, 00:23:45.298 "name": "malloc0" 00:23:45.298 }, 00:23:45.298 "method": "bdev_malloc_create" 00:23:45.298 }, 00:23:45.298 { 00:23:45.298 "params": { 00:23:45.298 "io_mechanism": "libaio", 00:23:45.298 "filename": "/dev/nullb0", 00:23:45.298 "name": "null0" 00:23:45.298 }, 00:23:45.298 "method": "bdev_xnvme_create" 00:23:45.298 }, 00:23:45.298 { 00:23:45.298 "method": "bdev_wait_for_examine" 00:23:45.298 } 00:23:45.298 ] 00:23:45.298 } 00:23:45.298 ] 00:23:45.298 } 00:23:45.298 [2024-04-17 14:42:53.013763] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:23:45.298 [2024-04-17 14:42:53.013968] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73381 ] 00:23:45.298 [2024-04-17 14:42:53.199918] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.298 [2024-04-17 14:42:53.475935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.949  Copying: 216/1024 [MB] (216 MBps) Copying: 432/1024 [MB] (215 MBps) Copying: 647/1024 [MB] (215 MBps) Copying: 862/1024 [MB] (215 MBps) Copying: 1024/1024 [MB] (average 215 MBps) 00:23:56.949 00:23:56.949 14:43:05 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:23:56.949 14:43:05 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:23:56.949 14:43:05 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:23:56.949 14:43:05 -- xnvme/xnvme.sh@42 -- # gen_conf 00:23:56.949 14:43:05 -- dd/common.sh@31 -- # xtrace_disable 00:23:56.949 14:43:05 -- common/autotest_common.sh@10 -- # set +x 00:23:56.949 { 00:23:56.949 "subsystems": [ 00:23:56.949 { 00:23:56.949 "subsystem": "bdev", 00:23:56.949 "config": [ 00:23:56.949 { 00:23:56.949 "params": { 00:23:56.949 "block_size": 512, 00:23:56.949 "num_blocks": 2097152, 00:23:56.949 "name": "malloc0" 00:23:56.949 }, 00:23:56.949 "method": "bdev_malloc_create" 00:23:56.949 }, 00:23:56.949 { 00:23:56.949 "params": { 00:23:56.949 "io_mechanism": "io_uring", 00:23:56.949 "filename": "/dev/nullb0", 00:23:56.949 "name": "null0" 00:23:56.949 }, 00:23:56.949 "method": "bdev_xnvme_create" 00:23:56.949 }, 00:23:56.949 { 00:23:56.949 "method": "bdev_wait_for_examine" 00:23:56.949 } 00:23:56.949 ] 00:23:56.949 } 00:23:56.949 ] 00:23:56.949 } 00:23:56.949 [2024-04-17 14:43:05.237783] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
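Note that the only difference between the libaio passes that just finished and the io_uring passes starting here is one field in the xnvme bdev definition; the four copies come from a simple nested loop. A sketch of the pattern (do_copy is a hypothetical wrapper around the two spdk_dd invocations shown above):

declare -A method_bdev_xnvme_create_0=([name]=null0 [filename]=/dev/nullb0)
for io in libaio io_uring; do
    method_bdev_xnvme_create_0[io_mechanism]=$io
    do_copy malloc0 null0   # forward: RAM bdev to null device
    do_copy null0 malloc0   # reverse: null device to RAM bdev
done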
00:23:56.949 [2024-04-17 14:43:05.237973] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73518 ] 00:23:56.949 [2024-04-17 14:43:05.441701] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.210 [2024-04-17 14:43:05.701071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.276  Copying: 224/1024 [MB] (224 MBps) Copying: 445/1024 [MB] (220 MBps) Copying: 646/1024 [MB] (200 MBps) Copying: 863/1024 [MB] (216 MBps) Copying: 1024/1024 [MB] (average 216 MBps) 00:24:09.276 00:24:09.276 14:43:17 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:24:09.276 14:43:17 -- xnvme/xnvme.sh@47 -- # gen_conf 00:24:09.276 14:43:17 -- dd/common.sh@31 -- # xtrace_disable 00:24:09.276 14:43:17 -- common/autotest_common.sh@10 -- # set +x 00:24:09.276 { 00:24:09.276 "subsystems": [ 00:24:09.276 { 00:24:09.276 "subsystem": "bdev", 00:24:09.276 "config": [ 00:24:09.276 { 00:24:09.276 "params": { 00:24:09.276 "block_size": 512, 00:24:09.276 "num_blocks": 2097152, 00:24:09.276 "name": "malloc0" 00:24:09.276 }, 00:24:09.276 "method": "bdev_malloc_create" 00:24:09.276 }, 00:24:09.276 { 00:24:09.276 "params": { 00:24:09.276 "io_mechanism": "io_uring", 00:24:09.276 "filename": "/dev/nullb0", 00:24:09.276 "name": "null0" 00:24:09.276 }, 00:24:09.276 "method": "bdev_xnvme_create" 00:24:09.276 }, 00:24:09.276 { 00:24:09.276 "method": "bdev_wait_for_examine" 00:24:09.276 } 00:24:09.276 ] 00:24:09.276 } 00:24:09.276 ] 00:24:09.276 } 00:24:09.276 [2024-04-17 14:43:17.368664] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:24:09.276 [2024-04-17 14:43:17.368793] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73655 ] 00:24:09.276 [2024-04-17 14:43:17.537920] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.276 [2024-04-17 14:43:17.820776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.023  Copying: 250/1024 [MB] (250 MBps) Copying: 496/1024 [MB] (246 MBps) Copying: 739/1024 [MB] (243 MBps) Copying: 968/1024 [MB] (228 MBps) Copying: 1024/1024 [MB] (average 241 MBps) 00:24:21.023 00:24:21.023 14:43:29 -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:24:21.023 14:43:29 -- dd/common.sh@195 -- # modprobe -r null_blk 00:24:21.023 00:24:21.023 real 0m48.190s 00:24:21.023 user 0m42.567s 00:24:21.023 sys 0m4.945s 00:24:21.023 14:43:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:21.023 ************************************ 00:24:21.023 END TEST xnvme_to_malloc_dd_copy 00:24:21.023 ************************************ 00:24:21.023 14:43:29 -- common/autotest_common.sh@10 -- # set +x 00:24:21.023 14:43:29 -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:24:21.023 14:43:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:21.024 14:43:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:21.024 14:43:29 -- common/autotest_common.sh@10 -- # set +x 00:24:21.024 ************************************ 00:24:21.024 START TEST xnvme_bdevperf 00:24:21.024 ************************************ 00:24:21.024 14:43:29 -- common/autotest_common.sh@1111 -- # xnvme_bdevperf 00:24:21.024 14:43:29 -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:24:21.024 14:43:29 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:24:21.024 14:43:29 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:24:21.024 14:43:29 -- dd/common.sh@191 -- # return 00:24:21.024 14:43:29 -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:24:21.024 14:43:29 -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:24:21.024 14:43:29 -- xnvme/xnvme.sh@60 -- # local io 00:24:21.024 14:43:29 -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:24:21.024 14:43:29 -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:24:21.024 14:43:29 -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:24:21.024 14:43:29 -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:24:21.024 14:43:29 -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:24:21.024 14:43:29 -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:24:21.024 14:43:29 -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:24:21.024 14:43:29 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:24:21.024 14:43:29 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:24:21.024 14:43:29 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:24:21.024 14:43:29 -- xnvme/xnvme.sh@74 -- # gen_conf 00:24:21.024 14:43:29 -- dd/common.sh@31 -- # xtrace_disable 00:24:21.024 14:43:29 -- common/autotest_common.sh@10 -- # set +x 00:24:21.024 { 00:24:21.024 "subsystems": [ 00:24:21.024 { 00:24:21.024 "subsystem": "bdev", 00:24:21.024 "config": [ 00:24:21.024 { 00:24:21.024 "params": { 00:24:21.024 "io_mechanism": "libaio", 00:24:21.024 "filename": "/dev/nullb0", 00:24:21.024 "name": "null0" 
00:24:21.024 }, 00:24:21.024 "method": "bdev_xnvme_create" 00:24:21.024 }, 00:24:21.024 { 00:24:21.024 "method": "bdev_wait_for_examine" 00:24:21.024 } 00:24:21.024 ] 00:24:21.024 } 00:24:21.024 ] 00:24:21.024 } 00:24:21.024 [2024-04-17 14:43:29.485408] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:24:21.024 [2024-04-17 14:43:29.485563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73814 ] 00:24:21.310 [2024-04-17 14:43:29.657503] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.595 [2024-04-17 14:43:29.975023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.595 [2024-04-17 14:43:29.976611] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:24:22.167 [2024-04-17 14:43:30.475288] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:24:22.167 Running I/O for 5 seconds... 00:24:27.443 00:24:27.443 Latency(us) 00:24:27.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:27.443 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:24:27.443 null0 : 5.00 107425.22 419.63 0.00 0.00 592.23 165.79 2215.74 00:24:27.443 =================================================================================================================== 00:24:27.443 Total : 107425.22 419.63 0.00 0.00 592.23 165.79 2215.74 00:24:28.817 14:43:37 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:24:28.817 14:43:37 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:24:28.817 14:43:37 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:24:28.817 14:43:37 -- xnvme/xnvme.sh@74 -- # gen_conf 00:24:28.817 14:43:37 -- dd/common.sh@31 -- # xtrace_disable 00:24:28.817 14:43:37 -- common/autotest_common.sh@10 -- # set +x 00:24:28.817 { 00:24:28.817 "subsystems": [ 00:24:28.817 { 00:24:28.817 "subsystem": "bdev", 00:24:28.817 "config": [ 00:24:28.817 { 00:24:28.817 "params": { 00:24:28.817 "io_mechanism": "io_uring", 00:24:28.817 "filename": "/dev/nullb0", 00:24:28.817 "name": "null0" 00:24:28.817 }, 00:24:28.817 "method": "bdev_xnvme_create" 00:24:28.817 }, 00:24:28.817 { 00:24:28.817 "method": "bdev_wait_for_examine" 00:24:28.817 } 00:24:28.817 ] 00:24:28.817 } 00:24:28.817 ] 00:24:28.817 } 00:24:28.817 [2024-04-17 14:43:37.256900] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:24:28.817 [2024-04-17 14:43:37.257099] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73899 ] 00:24:29.073 [2024-04-17 14:43:37.444852] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.331 [2024-04-17 14:43:37.725604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.331 [2024-04-17 14:43:37.726957] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:24:29.615 [2024-04-17 14:43:38.178707] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:24:29.615 Running I/O for 5 seconds... 
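Both latency tables in this test come from the same bdevperf command line; only the io_mechanism in the generated config differs between them. A sketch of the invocation just launched (gen_conf here stands for the single-bdev config printed above):

# -q 64: queue depth, -w randread: workload, -t 5: run time in seconds,
# -T null0: target bdev, -o 4096: I/O size in bytes
build/examples/bdevperf --json <(gen_conf) -q 64 -w randread -t 5 -T null0 -o 4096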
00:24:34.888 00:24:34.888 Latency(us) 00:24:34.888 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.888 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:24:34.888 null0 : 5.00 174136.97 680.22 0.00 0.00 364.79 219.43 3682.50 00:24:34.888 =================================================================================================================== 00:24:34.888 Total : 174136.97 680.22 0.00 0.00 364.79 219.43 3682.50 00:24:36.299 14:43:44 -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:24:36.299 14:43:44 -- dd/common.sh@195 -- # modprobe -r null_blk 00:24:36.299 00:24:36.299 real 0m15.340s 00:24:36.299 user 0m11.635s 00:24:36.299 sys 0m3.432s 00:24:36.299 14:43:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:36.299 14:43:44 -- common/autotest_common.sh@10 -- # set +x 00:24:36.299 ************************************ 00:24:36.299 END TEST xnvme_bdevperf 00:24:36.299 ************************************ 00:24:36.299 00:24:36.299 real 1m3.903s 00:24:36.299 user 0m54.316s 00:24:36.299 sys 0m8.605s 00:24:36.299 14:43:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:36.299 14:43:44 -- common/autotest_common.sh@10 -- # set +x 00:24:36.299 ************************************ 00:24:36.299 END TEST nvme_xnvme 00:24:36.299 ************************************ 00:24:36.299 14:43:44 -- spdk/autotest.sh@246 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:24:36.299 14:43:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:36.299 14:43:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:36.299 14:43:44 -- common/autotest_common.sh@10 -- # set +x 00:24:36.299 ************************************ 00:24:36.299 START TEST blockdev_xnvme 00:24:36.299 ************************************ 00:24:36.299 14:43:44 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:24:36.558 * Looking for test storage... 
00:24:36.558 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:24:36.558 14:43:44 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:24:36.558 14:43:44 -- bdev/nbd_common.sh@6 -- # set -e 00:24:36.558 14:43:44 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:24:36.558 14:43:44 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:24:36.558 14:43:44 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:24:36.558 14:43:44 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:24:36.558 14:43:44 -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:24:36.558 14:43:44 -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:24:36.558 14:43:44 -- bdev/blockdev.sh@20 -- # : 00:24:36.558 14:43:45 -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:24:36.558 14:43:45 -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:24:36.558 14:43:45 -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:24:36.558 14:43:45 -- bdev/blockdev.sh@674 -- # uname -s 00:24:36.558 14:43:45 -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:24:36.558 14:43:45 -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:24:36.558 14:43:45 -- bdev/blockdev.sh@682 -- # test_type=xnvme 00:24:36.558 14:43:45 -- bdev/blockdev.sh@683 -- # crypto_device= 00:24:36.558 14:43:45 -- bdev/blockdev.sh@684 -- # dek= 00:24:36.558 14:43:45 -- bdev/blockdev.sh@685 -- # env_ctx= 00:24:36.558 14:43:45 -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:24:36.558 14:43:45 -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:24:36.558 14:43:45 -- bdev/blockdev.sh@690 -- # [[ xnvme == bdev ]] 00:24:36.558 14:43:45 -- bdev/blockdev.sh@690 -- # [[ xnvme == crypto_* ]] 00:24:36.558 14:43:45 -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:24:36.558 14:43:45 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74055 00:24:36.558 14:43:45 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:24:36.558 14:43:45 -- bdev/blockdev.sh@49 -- # waitforlisten 74055 00:24:36.558 14:43:45 -- common/autotest_common.sh@817 -- # '[' -z 74055 ']' 00:24:36.558 14:43:45 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:24:36.558 14:43:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:36.558 14:43:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:36.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:36.558 14:43:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:36.558 14:43:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:36.558 14:43:45 -- common/autotest_common.sh@10 -- # set +x 00:24:36.558 [2024-04-17 14:43:45.143722] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
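blockdev.sh is the generic bdev test suite; the xnvme argument only chooses which config builder runs before the shared hello_world, bounds, and I/O stages. A simplified sketch of the dispatch the trace walks through next (other supported test types elided; structure inferred from the script line numbers in the trace, not a verbatim copy):

test_type=${1:-}
case "$test_type" in
    xnvme)
        setup_xnvme_conf   # probe /dev/nvme*n* and emit bdev_xnvme_create RPCs
        ;;
    *)
        echo "unsupported test type: $test_type" >&2
        exit 1
        ;;
esac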
00:24:36.558 [2024-04-17 14:43:45.143909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74055 ] 00:24:36.815 [2024-04-17 14:43:45.331979] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.073 [2024-04-17 14:43:45.663332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.459 14:43:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:38.459 14:43:46 -- common/autotest_common.sh@850 -- # return 0 00:24:38.459 14:43:46 -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:24:38.459 14:43:46 -- bdev/blockdev.sh@729 -- # setup_xnvme_conf 00:24:38.459 14:43:46 -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:24:38.459 14:43:46 -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:24:38.459 14:43:46 -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:38.727 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:38.984 Waiting for block devices as requested 00:24:38.984 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:38.984 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:39.242 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:24:39.242 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:24:44.508 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:24:44.508 14:43:52 -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:24:44.508 14:43:52 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:24:44.508 14:43:52 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:24:44.508 14:43:52 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:24:44.508 14:43:52 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:24:44.508 14:43:52 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:24:44.508 14:43:52 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:24:44.508 14:43:52 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:44.508 14:43:52 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:44.508 14:43:52 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:24:44.508 14:43:52 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:24:44.508 14:43:52 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:24:44.508 14:43:52 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:44.508 14:43:52 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:44.508 14:43:52 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:24:44.508 14:43:52 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:24:44.508 14:43:52 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:24:44.508 14:43:52 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:24:44.508 14:43:52 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:44.508 14:43:52 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:24:44.508 14:43:52 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:24:44.508 14:43:52 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:24:44.508 14:43:52 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:24:44.508 14:43:52 -- 
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:44.508 14:43:52 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:24:44.508 14:43:52 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:24:44.508 14:43:52 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:24:44.508 14:43:52 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:24:44.508 14:43:52 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:44.508 14:43:52 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:24:44.508 14:43:52 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:24:44.508 14:43:52 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:24:44.508 14:43:52 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:24:44.508 14:43:52 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:44.508 14:43:52 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:24:44.508 14:43:52 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:24:44.508 14:43:52 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:24:44.508 14:43:52 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:24:44.508 14:43:52 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:44.508 14:43:52 -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:24:44.508 14:43:52 -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:24:44.508 14:43:52 -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:24:44.508 14:43:52 -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:24:44.508 14:43:52 -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:24:44.508 14:43:52 -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:24:44.508 14:43:52 -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:24:44.508 14:43:52 -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:24:44.508 14:43:52 -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:24:44.508 14:43:52 -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:24:44.508 14:43:52 -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:24:44.508 14:43:52 -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:24:44.508 14:43:52 -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:24:44.508 14:43:52 -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:24:44.508 14:43:52 -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:24:44.508 14:43:52 -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:24:44.508 14:43:52 -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:24:44.508 14:43:52 -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:24:44.508 14:43:52 -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:24:44.508 14:43:52 -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:24:44.508 14:43:52 -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:24:44.508 14:43:52 -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:24:44.508 14:43:52 -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:24:44.508 14:43:52 -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:24:44.508 14:43:52 -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:24:44.508 14:43:52 -- bdev/blockdev.sh@100 -- # rpc_cmd 00:24:44.508 14:43:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.508 14:43:52 -- common/autotest_common.sh@10 -- # set +x 
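The loop above turns every non-zoned /dev/nvme*n* node into one bdev_xnvme_create line and replays the batch through rpc_cmd, the suite's persistent rpc.py channel. A standalone sketch that issues one rpc.py call per device instead (argument order copied from the printf on the next trace lines):

declare -A zoned_devs=()   # filled by get_zoned_devs in the suite; empty on this host
nvmes=()
for nvme in /dev/nvme*n*; do
    [[ -b $nvme ]] || continue
    [[ -z ${zoned_devs[${nvme##*/}]:-} ]] || continue   # skip zoned namespaces
    nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} io_uring")
done
(( ${#nvmes[@]} > 0 )) || exit 1
for cmd in "${nvmes[@]}"; do
    scripts/rpc.py $cmd   # word splitting intended: cmd carries the arguments
done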
00:24:44.508 14:43:52 -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:24:44.508 nvme0n1 00:24:44.508 nvme1n1 00:24:44.508 nvme2n1 00:24:44.508 nvme2n2 00:24:44.508 nvme2n3 00:24:44.508 nvme3n1 00:24:44.508 14:43:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.508 14:43:52 -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:24:44.508 14:43:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.508 14:43:52 -- common/autotest_common.sh@10 -- # set +x 00:24:44.508 14:43:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.508 14:43:52 -- bdev/blockdev.sh@740 -- # cat 00:24:44.508 14:43:52 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:24:44.508 14:43:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.508 14:43:52 -- common/autotest_common.sh@10 -- # set +x 00:24:44.508 14:43:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.508 14:43:52 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:24:44.508 14:43:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.508 14:43:52 -- common/autotest_common.sh@10 -- # set +x 00:24:44.508 14:43:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.508 14:43:52 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:24:44.508 14:43:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.508 14:43:52 -- common/autotest_common.sh@10 -- # set +x 00:24:44.508 14:43:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.508 14:43:53 -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:24:44.508 14:43:53 -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:24:44.508 14:43:53 -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:24:44.508 14:43:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.508 14:43:53 -- common/autotest_common.sh@10 -- # set +x 00:24:44.508 14:43:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.508 14:43:53 -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:24:44.508 14:43:53 -- bdev/blockdev.sh@749 -- # jq -r .name 00:24:44.509 14:43:53 -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "940e8dd1-394a-49bc-a006-7fc2c64d7250"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "940e8dd1-394a-49bc-a006-7fc2c64d7250",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "b376d1b7-9d11-4602-ab4d-e1805ae2ff58"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "b376d1b7-9d11-4602-ab4d-e1805ae2ff58",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' 
"zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "d371a320-6a92-4beb-a597-78c9db6cd0eb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d371a320-6a92-4beb-a597-78c9db6cd0eb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "f9fe658c-46e2-4c62-9723-0d48b52ec728"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f9fe658c-46e2-4c62-9723-0d48b52ec728",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "005a40ee-12b5-4044-9967-17ee86edd5ae"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "005a40ee-12b5-4044-9967-17ee86edd5ae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "9ec5ef6e-3326-488d-b20d-687f5cc964b3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "9ec5ef6e-3326-488d-b20d-687f5cc964b3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:24:44.509 14:43:53 -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:24:44.509 14:43:53 -- bdev/blockdev.sh@752 -- # hello_world_bdev=nvme0n1 00:24:44.509 14:43:53 -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:24:44.509 14:43:53 -- bdev/blockdev.sh@754 -- # killprocess 74055 00:24:44.509 14:43:53 -- common/autotest_common.sh@936 -- # '[' -z 74055 ']' 00:24:44.509 14:43:53 -- common/autotest_common.sh@940 -- # kill -0 74055 00:24:44.509 
14:43:53 -- common/autotest_common.sh@941 -- # uname 00:24:44.509 14:43:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:44.509 14:43:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74055 00:24:44.767 14:43:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:44.767 14:43:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:44.767 14:43:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74055' 00:24:44.767 killing process with pid 74055 00:24:44.767 14:43:53 -- common/autotest_common.sh@955 -- # kill 74055 00:24:44.767 14:43:53 -- common/autotest_common.sh@960 -- # wait 74055 00:24:48.051 14:43:56 -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:48.051 14:43:56 -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:24:48.051 14:43:56 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:24:48.051 14:43:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:48.051 14:43:56 -- common/autotest_common.sh@10 -- # set +x 00:24:48.051 ************************************ 00:24:48.051 START TEST bdev_hello_world 00:24:48.051 ************************************ 00:24:48.051 14:43:56 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:24:48.051 [2024-04-17 14:43:56.589373] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:24:48.051 [2024-04-17 14:43:56.589614] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74438 ] 00:24:48.309 [2024-04-17 14:43:56.766391] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.568 [2024-04-17 14:43:57.044030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:49.134 [2024-04-17 14:43:57.600023] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:24:49.134 [2024-04-17 14:43:57.600116] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:24:49.134 [2024-04-17 14:43:57.600154] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:24:49.134 [2024-04-17 14:43:57.603080] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:24:49.134 [2024-04-17 14:43:57.603444] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:24:49.134 [2024-04-17 14:43:57.603515] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:24:49.134 [2024-04-17 14:43:57.603715] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
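The bdev_hello_world stage above reduces to a single command. A minimal by-hand reproduction, using only the paths printed in this log (dropping run_test's trailing empty argument is an assumption of mine):

    # Re-run the hello_world example against the same bdev config the harness used.
    # Paths are verbatim from the log; a successful run ends with the notice
    # "Read string from bdev : Hello World!" followed by "Stopping app".
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b nvme0n1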
00:24:49.134 00:24:49.134 [2024-04-17 14:43:57.603768] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:24:51.035 00:24:51.035 real 0m2.641s 00:24:51.035 user 0m2.233s 00:24:51.035 sys 0m0.282s 00:24:51.035 14:43:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:51.035 ************************************ 00:24:51.035 END TEST bdev_hello_world 00:24:51.035 ************************************ 00:24:51.035 14:43:59 -- common/autotest_common.sh@10 -- # set +x 00:24:51.035 14:43:59 -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:24:51.035 14:43:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:51.035 14:43:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:51.035 14:43:59 -- common/autotest_common.sh@10 -- # set +x 00:24:51.035 ************************************ 00:24:51.035 START TEST bdev_bounds 00:24:51.035 ************************************ 00:24:51.035 14:43:59 -- common/autotest_common.sh@1111 -- # bdev_bounds '' 00:24:51.035 14:43:59 -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:24:51.035 14:43:59 -- bdev/blockdev.sh@290 -- # bdevio_pid=74490 00:24:51.035 14:43:59 -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:24:51.035 Process bdevio pid: 74490 00:24:51.035 14:43:59 -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 74490' 00:24:51.035 14:43:59 -- bdev/blockdev.sh@293 -- # waitforlisten 74490 00:24:51.035 14:43:59 -- common/autotest_common.sh@817 -- # '[' -z 74490 ']' 00:24:51.035 14:43:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:51.035 14:43:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:51.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:51.035 14:43:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:51.035 14:43:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:51.035 14:43:59 -- common/autotest_common.sh@10 -- # set +x 00:24:51.035 [2024-04-17 14:43:59.334036] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
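The bdev_bounds stage launches bdevio in wait mode (-w) and then drives the CUnit suites over the default RPC socket. A condensed sketch of those two steps, with the harness's waitforlisten helper replaced by a simple socket poll and the final kill standing in for killprocess (both simplifications of mine):

    # Start bdevio waiting on /var/tmp/spdk.sock, run the suites, then stop it.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done  # stand-in for waitforlisten
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
    kill %1  # the harness does the equivalent via killprocess, as logged below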
00:24:51.035 [2024-04-17 14:43:59.334177] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74490 ] 00:24:51.035 [2024-04-17 14:43:59.500926] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:51.293 [2024-04-17 14:43:59.818526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:51.293 [2024-04-17 14:43:59.818621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.293 [2024-04-17 14:43:59.818632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:51.860 14:44:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:51.860 14:44:00 -- common/autotest_common.sh@850 -- # return 0 00:24:51.860 14:44:00 -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:24:52.118 I/O targets: 00:24:52.118 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:24:52.118 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:24:52.118 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:24:52.118 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:24:52.118 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:24:52.118 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:24:52.118 00:24:52.118 00:24:52.118 CUnit - A unit testing framework for C - Version 2.1-3 00:24:52.118 http://cunit.sourceforge.net/ 00:24:52.118 00:24:52.118 00:24:52.118 Suite: bdevio tests on: nvme3n1 00:24:52.118 Test: blockdev write read block ...passed 00:24:52.118 Test: blockdev write zeroes read block ...passed 00:24:52.118 Test: blockdev write zeroes read no split ...passed 00:24:52.118 Test: blockdev write zeroes read split ...passed 00:24:52.118 Test: blockdev write zeroes read split partial ...passed 00:24:52.118 Test: blockdev reset ...passed 00:24:52.118 Test: blockdev write read 8 blocks ...passed 00:24:52.118 Test: blockdev write read size > 128k ...passed 00:24:52.118 Test: blockdev write read invalid size ...passed 00:24:52.118 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:52.118 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:52.118 Test: blockdev write read max offset ...passed 00:24:52.118 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:52.118 Test: blockdev writev readv 8 blocks ...passed 00:24:52.118 Test: blockdev writev readv 30 x 1block ...passed 00:24:52.118 Test: blockdev writev readv block ...passed 00:24:52.118 Test: blockdev writev readv size > 128k ...passed 00:24:52.118 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:52.118 Test: blockdev comparev and writev ...passed 00:24:52.118 Test: blockdev nvme passthru rw ...passed 00:24:52.118 Test: blockdev nvme passthru vendor specific ...passed 00:24:52.118 Test: blockdev nvme admin passthru ...passed 00:24:52.118 Test: blockdev copy ...passed 00:24:52.118 Suite: bdevio tests on: nvme2n3 00:24:52.118 Test: blockdev write read block ...passed 00:24:52.118 Test: blockdev write zeroes read block ...passed 00:24:52.119 Test: blockdev write zeroes read no split ...passed 00:24:52.119 Test: blockdev write zeroes read split ...passed 00:24:52.119 Test: blockdev write zeroes read split partial ...passed 00:24:52.119 Test: blockdev reset ...passed 00:24:52.119 Test: blockdev write read 8 blocks ...passed 00:24:52.119 Test: blockdev write read size > 128k 
...passed 00:24:52.119 Test: blockdev write read invalid size ...passed 00:24:52.119 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:52.119 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:52.119 Test: blockdev write read max offset ...passed 00:24:52.119 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:52.119 Test: blockdev writev readv 8 blocks ...passed 00:24:52.119 Test: blockdev writev readv 30 x 1block ...passed 00:24:52.119 Test: blockdev writev readv block ...passed 00:24:52.119 Test: blockdev writev readv size > 128k ...passed 00:24:52.119 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:52.119 Test: blockdev comparev and writev ...passed 00:24:52.119 Test: blockdev nvme passthru rw ...passed 00:24:52.119 Test: blockdev nvme passthru vendor specific ...passed 00:24:52.119 Test: blockdev nvme admin passthru ...passed 00:24:52.119 Test: blockdev copy ...passed 00:24:52.119 Suite: bdevio tests on: nvme2n2 00:24:52.119 Test: blockdev write read block ...passed 00:24:52.119 Test: blockdev write zeroes read block ...passed 00:24:52.119 Test: blockdev write zeroes read no split ...passed 00:24:52.377 Test: blockdev write zeroes read split ...passed 00:24:52.377 Test: blockdev write zeroes read split partial ...passed 00:24:52.377 Test: blockdev reset ...passed 00:24:52.377 Test: blockdev write read 8 blocks ...passed 00:24:52.377 Test: blockdev write read size > 128k ...passed 00:24:52.377 Test: blockdev write read invalid size ...passed 00:24:52.377 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:52.377 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:52.377 Test: blockdev write read max offset ...passed 00:24:52.377 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:52.377 Test: blockdev writev readv 8 blocks ...passed 00:24:52.377 Test: blockdev writev readv 30 x 1block ...passed 00:24:52.377 Test: blockdev writev readv block ...passed 00:24:52.377 Test: blockdev writev readv size > 128k ...passed 00:24:52.377 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:52.377 Test: blockdev comparev and writev ...passed 00:24:52.377 Test: blockdev nvme passthru rw ...passed 00:24:52.377 Test: blockdev nvme passthru vendor specific ...passed 00:24:52.377 Test: blockdev nvme admin passthru ...passed 00:24:52.377 Test: blockdev copy ...passed 00:24:52.377 Suite: bdevio tests on: nvme2n1 00:24:52.377 Test: blockdev write read block ...passed 00:24:52.377 Test: blockdev write zeroes read block ...passed 00:24:52.377 Test: blockdev write zeroes read no split ...passed 00:24:52.377 Test: blockdev write zeroes read split ...passed 00:24:52.377 Test: blockdev write zeroes read split partial ...passed 00:24:52.377 Test: blockdev reset ...passed 00:24:52.377 Test: blockdev write read 8 blocks ...passed 00:24:52.377 Test: blockdev write read size > 128k ...passed 00:24:52.377 Test: blockdev write read invalid size ...passed 00:24:52.377 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:52.377 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:52.377 Test: blockdev write read max offset ...passed 00:24:52.377 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:52.377 Test: blockdev writev readv 8 blocks ...passed 00:24:52.377 Test: blockdev writev readv 30 x 1block ...passed 00:24:52.377 Test: blockdev writev readv 
block ...passed 00:24:52.377 Test: blockdev writev readv size > 128k ...passed 00:24:52.377 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:52.377 Test: blockdev comparev and writev ...passed 00:24:52.377 Test: blockdev nvme passthru rw ...passed 00:24:52.377 Test: blockdev nvme passthru vendor specific ...passed 00:24:52.377 Test: blockdev nvme admin passthru ...passed 00:24:52.377 Test: blockdev copy ...passed 00:24:52.377 Suite: bdevio tests on: nvme1n1 00:24:52.377 Test: blockdev write read block ...passed 00:24:52.377 Test: blockdev write zeroes read block ...passed 00:24:52.377 Test: blockdev write zeroes read no split ...passed 00:24:52.377 Test: blockdev write zeroes read split ...passed 00:24:52.377 Test: blockdev write zeroes read split partial ...passed 00:24:52.377 Test: blockdev reset ...passed 00:24:52.377 Test: blockdev write read 8 blocks ...passed 00:24:52.377 Test: blockdev write read size > 128k ...passed 00:24:52.377 Test: blockdev write read invalid size ...passed 00:24:52.377 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:52.377 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:52.377 Test: blockdev write read max offset ...passed 00:24:52.666 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:52.666 Test: blockdev writev readv 8 blocks ...passed 00:24:52.666 Test: blockdev writev readv 30 x 1block ...passed 00:24:52.666 Test: blockdev writev readv block ...passed 00:24:52.666 Test: blockdev writev readv size > 128k ...passed 00:24:52.666 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:52.666 Test: blockdev comparev and writev ...passed 00:24:52.666 Test: blockdev nvme passthru rw ...passed 00:24:52.666 Test: blockdev nvme passthru vendor specific ...passed 00:24:52.666 Test: blockdev nvme admin passthru ...passed 00:24:52.666 Test: blockdev copy ...passed 00:24:52.666 Suite: bdevio tests on: nvme0n1 00:24:52.666 Test: blockdev write read block ...passed 00:24:52.666 Test: blockdev write zeroes read block ...passed 00:24:52.666 Test: blockdev write zeroes read no split ...passed 00:24:52.666 Test: blockdev write zeroes read split ...passed 00:24:52.666 Test: blockdev write zeroes read split partial ...passed 00:24:52.666 Test: blockdev reset ...passed 00:24:52.666 Test: blockdev write read 8 blocks ...passed 00:24:52.666 Test: blockdev write read size > 128k ...passed 00:24:52.666 Test: blockdev write read invalid size ...passed 00:24:52.666 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:52.666 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:52.666 Test: blockdev write read max offset ...passed 00:24:52.666 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:52.666 Test: blockdev writev readv 8 blocks ...passed 00:24:52.666 Test: blockdev writev readv 30 x 1block ...passed 00:24:52.666 Test: blockdev writev readv block ...passed 00:24:52.666 Test: blockdev writev readv size > 128k ...passed 00:24:52.666 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:52.666 Test: blockdev comparev and writev ...passed 00:24:52.666 Test: blockdev nvme passthru rw ...passed 00:24:52.666 Test: blockdev nvme passthru vendor specific ...passed 00:24:52.666 Test: blockdev nvme admin passthru ...passed 00:24:52.667 Test: blockdev copy ...passed 00:24:52.667 00:24:52.667 Run Summary: Type Total Ran Passed Failed Inactive 00:24:52.667 suites 6 6 n/a 0 0 
00:24:52.667 tests 138 138 138 0 0 00:24:52.667 asserts 780 780 780 0 n/a 00:24:52.667 00:24:52.667 Elapsed time = 1.565 seconds 00:24:52.667 0 00:24:52.667 14:44:01 -- bdev/blockdev.sh@295 -- # killprocess 74490 00:24:52.667 14:44:01 -- common/autotest_common.sh@936 -- # '[' -z 74490 ']' 00:24:52.667 14:44:01 -- common/autotest_common.sh@940 -- # kill -0 74490 00:24:52.667 14:44:01 -- common/autotest_common.sh@941 -- # uname 00:24:52.667 14:44:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:52.667 14:44:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74490 00:24:52.667 14:44:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:52.667 killing process with pid 74490 00:24:52.667 14:44:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:52.667 14:44:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74490' 00:24:52.667 14:44:01 -- common/autotest_common.sh@955 -- # kill 74490 00:24:52.667 14:44:01 -- common/autotest_common.sh@960 -- # wait 74490 00:24:54.044 14:44:02 -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:24:54.044 00:24:54.044 real 0m3.386s 00:24:54.044 user 0m7.842s 00:24:54.044 sys 0m0.443s 00:24:54.044 14:44:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:54.044 14:44:02 -- common/autotest_common.sh@10 -- # set +x 00:24:54.044 ************************************ 00:24:54.044 END TEST bdev_bounds 00:24:54.044 ************************************ 00:24:54.302 14:44:02 -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:24:54.302 14:44:02 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:24:54.302 14:44:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:54.302 14:44:02 -- common/autotest_common.sh@10 -- # set +x 00:24:54.302 ************************************ 00:24:54.302 START TEST bdev_nbd 00:24:54.302 ************************************ 00:24:54.302 14:44:02 -- common/autotest_common.sh@1111 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:24:54.302 14:44:02 -- bdev/blockdev.sh@300 -- # uname -s 00:24:54.303 14:44:02 -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:24:54.303 14:44:02 -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:54.303 14:44:02 -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:24:54.303 14:44:02 -- bdev/blockdev.sh@304 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:24:54.303 14:44:02 -- bdev/blockdev.sh@304 -- # local bdev_all 00:24:54.303 14:44:02 -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:24:54.303 14:44:02 -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:24:54.303 14:44:02 -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:24:54.303 14:44:02 -- bdev/blockdev.sh@311 -- # local nbd_all 00:24:54.303 14:44:02 -- bdev/blockdev.sh@312 -- # bdev_num=6 00:24:54.303 14:44:02 -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:24:54.303 14:44:02 -- bdev/blockdev.sh@314 -- # local nbd_list 00:24:54.303 14:44:02 -- bdev/blockdev.sh@315 -- # bdev_list=('nvme0n1' 
'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:24:54.303 14:44:02 -- bdev/blockdev.sh@315 -- # local bdev_list 00:24:54.303 14:44:02 -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:24:54.303 14:44:02 -- bdev/blockdev.sh@318 -- # nbd_pid=74570 00:24:54.303 14:44:02 -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:24:54.303 14:44:02 -- bdev/blockdev.sh@320 -- # waitforlisten 74570 /var/tmp/spdk-nbd.sock 00:24:54.303 14:44:02 -- common/autotest_common.sh@817 -- # '[' -z 74570 ']' 00:24:54.303 14:44:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:24:54.303 14:44:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:54.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:24:54.303 14:44:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:24:54.303 14:44:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:54.303 14:44:02 -- common/autotest_common.sh@10 -- # set +x 00:24:54.561 [2024-04-17 14:44:02.906000] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:24:54.561 [2024-04-17 14:44:02.906266] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:54.561 [2024-04-17 14:44:03.107519] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.819 [2024-04-17 14:44:03.390725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:55.384 14:44:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:55.384 14:44:03 -- common/autotest_common.sh@850 -- # return 0 00:24:55.384 14:44:03 -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:24:55.384 14:44:03 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:55.384 14:44:03 -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:24:55.384 14:44:03 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:24:55.384 14:44:03 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:24:55.384 14:44:03 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:55.384 14:44:03 -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:24:55.384 14:44:03 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:24:55.384 14:44:03 -- bdev/nbd_common.sh@24 -- # local i 00:24:55.384 14:44:03 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:24:55.384 14:44:03 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:24:55.384 14:44:03 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:24:55.384 14:44:03 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:24:55.950 14:44:04 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:24:55.950 14:44:04 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:24:55.950 14:44:04 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:24:55.950 14:44:04 -- common/autotest_common.sh@854 -- # local 
nbd_name=nbd0 00:24:55.950 14:44:04 -- common/autotest_common.sh@855 -- # local i 00:24:55.950 14:44:04 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:24:55.950 14:44:04 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:24:55.950 14:44:04 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:24:55.950 14:44:04 -- common/autotest_common.sh@859 -- # break 00:24:55.950 14:44:04 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:24:55.950 14:44:04 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:24:55.950 14:44:04 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:55.950 1+0 records in 00:24:55.950 1+0 records out 00:24:55.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000493236 s, 8.3 MB/s 00:24:55.950 14:44:04 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:55.950 14:44:04 -- common/autotest_common.sh@872 -- # size=4096 00:24:55.950 14:44:04 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:55.950 14:44:04 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:24:55.950 14:44:04 -- common/autotest_common.sh@875 -- # return 0 00:24:55.950 14:44:04 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:55.950 14:44:04 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:24:55.951 14:44:04 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:24:56.209 14:44:04 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:24:56.209 14:44:04 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:24:56.209 14:44:04 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:24:56.209 14:44:04 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:24:56.209 14:44:04 -- common/autotest_common.sh@855 -- # local i 00:24:56.209 14:44:04 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:24:56.209 14:44:04 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:24:56.209 14:44:04 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:24:56.209 14:44:04 -- common/autotest_common.sh@859 -- # break 00:24:56.209 14:44:04 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:24:56.209 14:44:04 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:24:56.209 14:44:04 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:56.209 1+0 records in 00:24:56.209 1+0 records out 00:24:56.209 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000634085 s, 6.5 MB/s 00:24:56.209 14:44:04 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:56.209 14:44:04 -- common/autotest_common.sh@872 -- # size=4096 00:24:56.209 14:44:04 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:56.209 14:44:04 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:24:56.209 14:44:04 -- common/autotest_common.sh@875 -- # return 0 00:24:56.209 14:44:04 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:56.209 14:44:04 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:24:56.209 14:44:04 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:24:56.468 14:44:04 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:24:56.468 14:44:04 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:24:56.468 14:44:04 -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:24:56.468 14:44:04 -- common/autotest_common.sh@854 -- # local nbd_name=nbd2 00:24:56.468 14:44:04 -- common/autotest_common.sh@855 -- # local i 00:24:56.468 14:44:04 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:24:56.468 14:44:04 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:24:56.468 14:44:04 -- common/autotest_common.sh@858 -- # grep -q -w nbd2 /proc/partitions 00:24:56.468 14:44:04 -- common/autotest_common.sh@859 -- # break 00:24:56.468 14:44:04 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:24:56.468 14:44:04 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:24:56.468 14:44:04 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:56.468 1+0 records in 00:24:56.468 1+0 records out 00:24:56.468 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000585745 s, 7.0 MB/s 00:24:56.468 14:44:04 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:56.468 14:44:04 -- common/autotest_common.sh@872 -- # size=4096 00:24:56.468 14:44:04 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:56.468 14:44:04 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:24:56.468 14:44:04 -- common/autotest_common.sh@875 -- # return 0 00:24:56.468 14:44:04 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:56.468 14:44:04 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:24:56.468 14:44:04 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:24:56.727 14:44:05 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:24:56.727 14:44:05 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:24:56.727 14:44:05 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:24:56.727 14:44:05 -- common/autotest_common.sh@854 -- # local nbd_name=nbd3 00:24:56.727 14:44:05 -- common/autotest_common.sh@855 -- # local i 00:24:56.727 14:44:05 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:24:56.727 14:44:05 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:24:56.727 14:44:05 -- common/autotest_common.sh@858 -- # grep -q -w nbd3 /proc/partitions 00:24:56.727 14:44:05 -- common/autotest_common.sh@859 -- # break 00:24:56.727 14:44:05 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:24:56.727 14:44:05 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:24:56.727 14:44:05 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:56.727 1+0 records in 00:24:56.727 1+0 records out 00:24:56.727 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000462342 s, 8.9 MB/s 00:24:56.727 14:44:05 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:56.727 14:44:05 -- common/autotest_common.sh@872 -- # size=4096 00:24:56.727 14:44:05 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:56.727 14:44:05 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:24:56.727 14:44:05 -- common/autotest_common.sh@875 -- # return 0 00:24:56.727 14:44:05 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:56.727 14:44:05 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:24:56.727 14:44:05 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:24:56.986 14:44:05 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:24:56.986 14:44:05 -- 
bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:24:56.986 14:44:05 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:24:56.986 14:44:05 -- common/autotest_common.sh@854 -- # local nbd_name=nbd4 00:24:56.986 14:44:05 -- common/autotest_common.sh@855 -- # local i 00:24:56.986 14:44:05 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:24:56.986 14:44:05 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:24:56.986 14:44:05 -- common/autotest_common.sh@858 -- # grep -q -w nbd4 /proc/partitions 00:24:56.986 14:44:05 -- common/autotest_common.sh@859 -- # break 00:24:56.986 14:44:05 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:24:56.986 14:44:05 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:24:56.986 14:44:05 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:56.986 1+0 records in 00:24:56.986 1+0 records out 00:24:56.986 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00060491 s, 6.8 MB/s 00:24:56.986 14:44:05 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:56.986 14:44:05 -- common/autotest_common.sh@872 -- # size=4096 00:24:56.986 14:44:05 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:56.986 14:44:05 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:24:56.986 14:44:05 -- common/autotest_common.sh@875 -- # return 0 00:24:56.986 14:44:05 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:56.986 14:44:05 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:24:56.986 14:44:05 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:24:57.254 14:44:05 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:24:57.254 14:44:05 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:24:57.254 14:44:05 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:24:57.254 14:44:05 -- common/autotest_common.sh@854 -- # local nbd_name=nbd5 00:24:57.254 14:44:05 -- common/autotest_common.sh@855 -- # local i 00:24:57.254 14:44:05 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:24:57.254 14:44:05 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:24:57.254 14:44:05 -- common/autotest_common.sh@858 -- # grep -q -w nbd5 /proc/partitions 00:24:57.254 14:44:05 -- common/autotest_common.sh@859 -- # break 00:24:57.254 14:44:05 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:24:57.254 14:44:05 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:24:57.254 14:44:05 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:57.254 1+0 records in 00:24:57.254 1+0 records out 00:24:57.254 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000822614 s, 5.0 MB/s 00:24:57.254 14:44:05 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:57.254 14:44:05 -- common/autotest_common.sh@872 -- # size=4096 00:24:57.254 14:44:05 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:57.254 14:44:05 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:24:57.254 14:44:05 -- common/autotest_common.sh@875 -- # return 0 00:24:57.254 14:44:05 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:57.254 14:44:05 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:24:57.254 14:44:05 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:57.513 14:44:06 -- 
bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:24:57.513 { 00:24:57.513 "nbd_device": "/dev/nbd0", 00:24:57.513 "bdev_name": "nvme0n1" 00:24:57.513 }, 00:24:57.513 { 00:24:57.513 "nbd_device": "/dev/nbd1", 00:24:57.513 "bdev_name": "nvme1n1" 00:24:57.513 }, 00:24:57.513 { 00:24:57.513 "nbd_device": "/dev/nbd2", 00:24:57.513 "bdev_name": "nvme2n1" 00:24:57.513 }, 00:24:57.513 { 00:24:57.513 "nbd_device": "/dev/nbd3", 00:24:57.513 "bdev_name": "nvme2n2" 00:24:57.513 }, 00:24:57.513 { 00:24:57.513 "nbd_device": "/dev/nbd4", 00:24:57.513 "bdev_name": "nvme2n3" 00:24:57.513 }, 00:24:57.513 { 00:24:57.513 "nbd_device": "/dev/nbd5", 00:24:57.513 "bdev_name": "nvme3n1" 00:24:57.513 } 00:24:57.513 ]' 00:24:57.513 14:44:06 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:24:57.513 14:44:06 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:24:57.513 14:44:06 -- bdev/nbd_common.sh@119 -- # echo '[ 00:24:57.513 { 00:24:57.513 "nbd_device": "/dev/nbd0", 00:24:57.513 "bdev_name": "nvme0n1" 00:24:57.513 }, 00:24:57.513 { 00:24:57.513 "nbd_device": "/dev/nbd1", 00:24:57.513 "bdev_name": "nvme1n1" 00:24:57.513 }, 00:24:57.513 { 00:24:57.513 "nbd_device": "/dev/nbd2", 00:24:57.513 "bdev_name": "nvme2n1" 00:24:57.513 }, 00:24:57.513 { 00:24:57.513 "nbd_device": "/dev/nbd3", 00:24:57.513 "bdev_name": "nvme2n2" 00:24:57.513 }, 00:24:57.513 { 00:24:57.513 "nbd_device": "/dev/nbd4", 00:24:57.513 "bdev_name": "nvme2n3" 00:24:57.513 }, 00:24:57.513 { 00:24:57.513 "nbd_device": "/dev/nbd5", 00:24:57.513 "bdev_name": "nvme3n1" 00:24:57.513 } 00:24:57.513 ]' 00:24:57.513 14:44:06 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:24:57.513 14:44:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:57.513 14:44:06 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:24:57.513 14:44:06 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:57.513 14:44:06 -- bdev/nbd_common.sh@51 -- # local i 00:24:57.513 14:44:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:57.513 14:44:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:57.772 14:44:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:57.772 14:44:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:57.772 14:44:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:57.772 14:44:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:57.772 14:44:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:57.772 14:44:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:57.772 14:44:06 -- bdev/nbd_common.sh@41 -- # break 00:24:57.772 14:44:06 -- bdev/nbd_common.sh@45 -- # return 0 00:24:57.772 14:44:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:57.772 14:44:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:24:58.030 14:44:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:58.031 14:44:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:58.031 14:44:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:58.031 14:44:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:58.031 14:44:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:58.031 14:44:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 
/proc/partitions 00:24:58.031 14:44:06 -- bdev/nbd_common.sh@41 -- # break 00:24:58.031 14:44:06 -- bdev/nbd_common.sh@45 -- # return 0 00:24:58.031 14:44:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:58.031 14:44:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:24:58.289 14:44:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:24:58.289 14:44:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:24:58.289 14:44:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:24:58.289 14:44:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:58.289 14:44:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:58.289 14:44:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:24:58.289 14:44:06 -- bdev/nbd_common.sh@41 -- # break 00:24:58.289 14:44:06 -- bdev/nbd_common.sh@45 -- # return 0 00:24:58.289 14:44:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:58.289 14:44:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:24:58.549 14:44:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:24:58.549 14:44:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:24:58.549 14:44:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:24:58.549 14:44:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:58.549 14:44:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:58.549 14:44:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:24:58.549 14:44:07 -- bdev/nbd_common.sh@41 -- # break 00:24:58.549 14:44:07 -- bdev/nbd_common.sh@45 -- # return 0 00:24:58.549 14:44:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:58.549 14:44:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:24:58.808 14:44:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:24:58.808 14:44:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:24:58.808 14:44:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:24:58.808 14:44:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:58.808 14:44:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:58.808 14:44:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:24:58.808 14:44:07 -- bdev/nbd_common.sh@41 -- # break 00:24:58.808 14:44:07 -- bdev/nbd_common.sh@45 -- # return 0 00:24:58.808 14:44:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:58.808 14:44:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:24:59.374 14:44:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:24:59.374 14:44:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:24:59.374 14:44:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:24:59.374 14:44:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:59.374 14:44:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:59.374 14:44:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:24:59.374 14:44:07 -- bdev/nbd_common.sh@41 -- # break 00:24:59.374 14:44:07 -- bdev/nbd_common.sh@45 -- # return 0 00:24:59.374 14:44:07 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:59.374 14:44:07 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:59.374 14:44:07 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
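The nbd stage is one start/inspect/stop RPC lifecycle per bdev, plus a one-block dd readback probe. Distilled to a single device, using only the calls visible in this log (bdev_svc must already be listening on /var/tmp/spdk-nbd.sock, as it is here):

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
    $RPC nbd_start_disk nvme0n1 /dev/nbd0    # export the bdev as an NBD device
    $RPC nbd_get_disks                       # JSON map of /dev/nbdX <-> bdev_name
    dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
        bs=4096 count=1 iflag=direct         # one-block readback, as above
    $RPC nbd_stop_disk /dev/nbd0             # tear the export down again

Further down, the same sockets carry a randomized data pass: dd writes 1 MiB from /dev/urandom through each /dev/nbdX with oflag=direct, and cmp -b -n 1M verifies it against the source file.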
00:24:59.633 14:44:07 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:24:59.633 14:44:07 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:59.633 14:44:07 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:24:59.633 14:44:08 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:24:59.633 14:44:08 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:59.633 14:44:08 -- bdev/nbd_common.sh@65 -- # echo '' 00:24:59.633 14:44:08 -- bdev/nbd_common.sh@65 -- # true 00:24:59.633 14:44:08 -- bdev/nbd_common.sh@65 -- # count=0 00:24:59.633 14:44:08 -- bdev/nbd_common.sh@66 -- # echo 0 00:24:59.633 14:44:08 -- bdev/nbd_common.sh@122 -- # count=0 00:24:59.633 14:44:08 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:24:59.633 14:44:08 -- bdev/nbd_common.sh@127 -- # return 0 00:24:59.633 14:44:08 -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:24:59.633 14:44:08 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:59.633 14:44:08 -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:24:59.633 14:44:08 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:24:59.633 14:44:08 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:24:59.633 14:44:08 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:24:59.633 14:44:08 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:24:59.633 14:44:08 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:59.633 14:44:08 -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:24:59.633 14:44:08 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:59.633 14:44:08 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:24:59.633 14:44:08 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:59.633 14:44:08 -- bdev/nbd_common.sh@12 -- # local i 00:24:59.633 14:44:08 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:59.633 14:44:08 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:24:59.633 14:44:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:24:59.894 /dev/nbd0 00:24:59.894 14:44:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:59.894 14:44:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:59.894 14:44:08 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:24:59.894 14:44:08 -- common/autotest_common.sh@855 -- # local i 00:24:59.894 14:44:08 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:24:59.895 14:44:08 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:24:59.895 14:44:08 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:24:59.895 14:44:08 -- common/autotest_common.sh@859 -- # break 00:24:59.895 14:44:08 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:24:59.895 14:44:08 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:24:59.895 14:44:08 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:59.895 1+0 records in 00:24:59.895 1+0 records out 00:24:59.895 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000503966 s, 
8.1 MB/s 00:24:59.895 14:44:08 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:59.895 14:44:08 -- common/autotest_common.sh@872 -- # size=4096 00:24:59.895 14:44:08 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:59.895 14:44:08 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:24:59.895 14:44:08 -- common/autotest_common.sh@875 -- # return 0 00:24:59.895 14:44:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:59.895 14:44:08 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:24:59.895 14:44:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:25:00.159 /dev/nbd1 00:25:00.159 14:44:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:00.159 14:44:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:00.159 14:44:08 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:25:00.159 14:44:08 -- common/autotest_common.sh@855 -- # local i 00:25:00.159 14:44:08 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:25:00.159 14:44:08 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:25:00.159 14:44:08 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:25:00.159 14:44:08 -- common/autotest_common.sh@859 -- # break 00:25:00.159 14:44:08 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:25:00.159 14:44:08 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:25:00.159 14:44:08 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:00.159 1+0 records in 00:25:00.159 1+0 records out 00:25:00.159 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000893426 s, 4.6 MB/s 00:25:00.159 14:44:08 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:00.159 14:44:08 -- common/autotest_common.sh@872 -- # size=4096 00:25:00.159 14:44:08 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:00.159 14:44:08 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:25:00.159 14:44:08 -- common/autotest_common.sh@875 -- # return 0 00:25:00.159 14:44:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:00.159 14:44:08 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:25:00.159 14:44:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:25:00.427 /dev/nbd10 00:25:00.427 14:44:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:25:00.427 14:44:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:25:00.427 14:44:08 -- common/autotest_common.sh@854 -- # local nbd_name=nbd10 00:25:00.427 14:44:08 -- common/autotest_common.sh@855 -- # local i 00:25:00.427 14:44:08 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:25:00.427 14:44:08 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:25:00.427 14:44:08 -- common/autotest_common.sh@858 -- # grep -q -w nbd10 /proc/partitions 00:25:00.427 14:44:08 -- common/autotest_common.sh@859 -- # break 00:25:00.427 14:44:08 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:25:00.427 14:44:08 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:25:00.427 14:44:08 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:00.427 1+0 records in 00:25:00.427 1+0 records out 00:25:00.427 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000696871 s, 5.9 MB/s 00:25:00.427 14:44:08 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:00.427 14:44:08 -- common/autotest_common.sh@872 -- # size=4096 00:25:00.427 14:44:08 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:00.427 14:44:08 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:25:00.427 14:44:08 -- common/autotest_common.sh@875 -- # return 0 00:25:00.427 14:44:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:00.427 14:44:08 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:25:00.427 14:44:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:25:00.694 /dev/nbd11 00:25:00.694 14:44:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:25:00.694 14:44:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:25:00.694 14:44:09 -- common/autotest_common.sh@854 -- # local nbd_name=nbd11 00:25:00.694 14:44:09 -- common/autotest_common.sh@855 -- # local i 00:25:00.694 14:44:09 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:25:00.694 14:44:09 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:25:00.694 14:44:09 -- common/autotest_common.sh@858 -- # grep -q -w nbd11 /proc/partitions 00:25:00.694 14:44:09 -- common/autotest_common.sh@859 -- # break 00:25:00.694 14:44:09 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:25:00.694 14:44:09 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:25:00.694 14:44:09 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:00.694 1+0 records in 00:25:00.694 1+0 records out 00:25:00.694 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000622745 s, 6.6 MB/s 00:25:00.694 14:44:09 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:00.694 14:44:09 -- common/autotest_common.sh@872 -- # size=4096 00:25:00.694 14:44:09 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:00.694 14:44:09 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:25:00.694 14:44:09 -- common/autotest_common.sh@875 -- # return 0 00:25:00.694 14:44:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:00.694 14:44:09 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:25:00.694 14:44:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:25:00.965 /dev/nbd12 00:25:00.966 14:44:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:25:00.966 14:44:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:25:00.966 14:44:09 -- common/autotest_common.sh@854 -- # local nbd_name=nbd12 00:25:00.966 14:44:09 -- common/autotest_common.sh@855 -- # local i 00:25:00.966 14:44:09 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:25:00.966 14:44:09 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:25:00.966 14:44:09 -- common/autotest_common.sh@858 -- # grep -q -w nbd12 /proc/partitions 00:25:00.966 14:44:09 -- common/autotest_common.sh@859 -- # break 00:25:00.966 14:44:09 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:25:00.966 14:44:09 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:25:00.966 14:44:09 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:00.966 1+0 records in 00:25:00.966 1+0 records out 00:25:00.966 4096 bytes (4.1 kB, 
4.0 KiB) copied, 0.000552397 s, 7.4 MB/s 00:25:00.966 14:44:09 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:00.966 14:44:09 -- common/autotest_common.sh@872 -- # size=4096 00:25:00.966 14:44:09 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:00.966 14:44:09 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:25:00.966 14:44:09 -- common/autotest_common.sh@875 -- # return 0 00:25:00.966 14:44:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:00.966 14:44:09 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:25:00.966 14:44:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:25:01.238 /dev/nbd13 00:25:01.238 14:44:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:25:01.238 14:44:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:25:01.238 14:44:09 -- common/autotest_common.sh@854 -- # local nbd_name=nbd13 00:25:01.238 14:44:09 -- common/autotest_common.sh@855 -- # local i 00:25:01.238 14:44:09 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:25:01.238 14:44:09 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:25:01.238 14:44:09 -- common/autotest_common.sh@858 -- # grep -q -w nbd13 /proc/partitions 00:25:01.238 14:44:09 -- common/autotest_common.sh@859 -- # break 00:25:01.238 14:44:09 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:25:01.238 14:44:09 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:25:01.238 14:44:09 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:01.238 1+0 records in 00:25:01.238 1+0 records out 00:25:01.238 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000549537 s, 7.5 MB/s 00:25:01.238 14:44:09 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:01.512 14:44:09 -- common/autotest_common.sh@872 -- # size=4096 00:25:01.512 14:44:09 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:01.512 14:44:09 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:25:01.512 14:44:09 -- common/autotest_common.sh@875 -- # return 0 00:25:01.512 14:44:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:01.512 14:44:09 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:25:01.512 14:44:09 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:25:01.512 14:44:09 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:01.512 14:44:09 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:01.844 14:44:10 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:25:01.844 { 00:25:01.844 "nbd_device": "/dev/nbd0", 00:25:01.844 "bdev_name": "nvme0n1" 00:25:01.844 }, 00:25:01.844 { 00:25:01.844 "nbd_device": "/dev/nbd1", 00:25:01.844 "bdev_name": "nvme1n1" 00:25:01.844 }, 00:25:01.844 { 00:25:01.844 "nbd_device": "/dev/nbd10", 00:25:01.844 "bdev_name": "nvme2n1" 00:25:01.844 }, 00:25:01.844 { 00:25:01.844 "nbd_device": "/dev/nbd11", 00:25:01.844 "bdev_name": "nvme2n2" 00:25:01.844 }, 00:25:01.844 { 00:25:01.844 "nbd_device": "/dev/nbd12", 00:25:01.844 "bdev_name": "nvme2n3" 00:25:01.844 }, 00:25:01.844 { 00:25:01.844 "nbd_device": "/dev/nbd13", 00:25:01.844 "bdev_name": "nvme3n1" 00:25:01.844 } 00:25:01.844 ]' 00:25:01.844 14:44:10 -- bdev/nbd_common.sh@64 -- # echo '[ 00:25:01.844 { 00:25:01.844 "nbd_device": 
"/dev/nbd0", 00:25:01.844 "bdev_name": "nvme0n1" 00:25:01.844 }, 00:25:01.844 { 00:25:01.844 "nbd_device": "/dev/nbd1", 00:25:01.844 "bdev_name": "nvme1n1" 00:25:01.844 }, 00:25:01.844 { 00:25:01.844 "nbd_device": "/dev/nbd10", 00:25:01.844 "bdev_name": "nvme2n1" 00:25:01.844 }, 00:25:01.844 { 00:25:01.844 "nbd_device": "/dev/nbd11", 00:25:01.844 "bdev_name": "nvme2n2" 00:25:01.844 }, 00:25:01.844 { 00:25:01.844 "nbd_device": "/dev/nbd12", 00:25:01.844 "bdev_name": "nvme2n3" 00:25:01.844 }, 00:25:01.844 { 00:25:01.844 "nbd_device": "/dev/nbd13", 00:25:01.844 "bdev_name": "nvme3n1" 00:25:01.844 } 00:25:01.844 ]' 00:25:01.844 14:44:10 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:25:01.844 14:44:10 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:25:01.844 /dev/nbd1 00:25:01.844 /dev/nbd10 00:25:01.844 /dev/nbd11 00:25:01.844 /dev/nbd12 00:25:01.844 /dev/nbd13' 00:25:01.844 14:44:10 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:25:01.844 /dev/nbd1 00:25:01.844 /dev/nbd10 00:25:01.844 /dev/nbd11 00:25:01.844 /dev/nbd12 00:25:01.844 /dev/nbd13' 00:25:01.844 14:44:10 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:25:01.844 14:44:10 -- bdev/nbd_common.sh@65 -- # count=6 00:25:01.844 14:44:10 -- bdev/nbd_common.sh@66 -- # echo 6 00:25:01.844 14:44:10 -- bdev/nbd_common.sh@95 -- # count=6 00:25:01.844 14:44:10 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:25:01.844 14:44:10 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:25:01.844 14:44:10 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:25:01.844 14:44:10 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:25:01.844 14:44:10 -- bdev/nbd_common.sh@71 -- # local operation=write 00:25:01.844 14:44:10 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:25:01.844 14:44:10 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:25:01.844 14:44:10 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:25:01.844 256+0 records in 00:25:01.844 256+0 records out 00:25:01.844 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00868678 s, 121 MB/s 00:25:01.844 14:44:10 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:25:01.844 14:44:10 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:25:01.844 256+0 records in 00:25:01.844 256+0 records out 00:25:01.844 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138759 s, 7.6 MB/s 00:25:01.844 14:44:10 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:25:01.844 14:44:10 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:25:02.104 256+0 records in 00:25:02.104 256+0 records out 00:25:02.104 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140576 s, 7.5 MB/s 00:25:02.104 14:44:10 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:25:02.104 14:44:10 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:25:02.104 256+0 records in 00:25:02.104 256+0 records out 00:25:02.104 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.132498 s, 7.9 MB/s 00:25:02.104 14:44:10 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:25:02.104 14:44:10 -- bdev/nbd_common.sh@78 -- # dd 
if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:25:02.362 256+0 records in 00:25:02.362 256+0 records out 00:25:02.362 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131782 s, 8.0 MB/s 00:25:02.362 14:44:10 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:25:02.362 14:44:10 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:25:02.362 256+0 records in 00:25:02.362 256+0 records out 00:25:02.362 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.130239 s, 8.1 MB/s 00:25:02.362 14:44:10 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:25:02.362 14:44:10 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:25:02.621 256+0 records in 00:25:02.621 256+0 records out 00:25:02.621 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.132205 s, 7.9 MB/s 00:25:02.621 14:44:11 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:25:02.621 14:44:11 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:25:02.621 14:44:11 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:25:02.621 14:44:11 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:25:02.621 14:44:11 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:25:02.621 14:44:11 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:25:02.621 14:44:11 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:25:02.621 14:44:11 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:02.621 14:44:11 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:25:02.621 14:44:11 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:02.621 14:44:11 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:25:02.621 14:44:11 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:02.621 14:44:11 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:25:02.621 14:44:11 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:02.621 14:44:11 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:25:02.621 14:44:11 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:02.621 14:44:11 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:25:02.621 14:44:11 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:02.621 14:44:11 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:25:02.621 14:44:11 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:25:02.621 14:44:11 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:25:02.621 14:44:11 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:02.621 14:44:11 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:25:02.621 14:44:11 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:02.621 14:44:11 -- bdev/nbd_common.sh@51 -- # local i 00:25:02.621 14:44:11 -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:02.621 14:44:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:25:02.880 14:44:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:02.880 14:44:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:02.880 14:44:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:02.880 14:44:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:02.880 14:44:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:02.880 14:44:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:03.139 14:44:11 -- bdev/nbd_common.sh@41 -- # break 00:25:03.139 14:44:11 -- bdev/nbd_common.sh@45 -- # return 0 00:25:03.139 14:44:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:03.139 14:44:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:25:03.398 14:44:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:03.398 14:44:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:03.398 14:44:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:03.398 14:44:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:03.398 14:44:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:03.398 14:44:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:03.398 14:44:11 -- bdev/nbd_common.sh@41 -- # break 00:25:03.398 14:44:11 -- bdev/nbd_common.sh@45 -- # return 0 00:25:03.398 14:44:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:03.398 14:44:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:25:03.658 14:44:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:25:03.658 14:44:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:25:03.658 14:44:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:25:03.658 14:44:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:03.658 14:44:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:03.658 14:44:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:25:03.658 14:44:12 -- bdev/nbd_common.sh@41 -- # break 00:25:03.658 14:44:12 -- bdev/nbd_common.sh@45 -- # return 0 00:25:03.658 14:44:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:03.658 14:44:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:25:03.917 14:44:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:25:03.917 14:44:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:25:03.917 14:44:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:25:03.917 14:44:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:03.917 14:44:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:03.917 14:44:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:25:03.917 14:44:12 -- bdev/nbd_common.sh@41 -- # break 00:25:03.917 14:44:12 -- bdev/nbd_common.sh@45 -- # return 0 00:25:03.917 14:44:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:03.917 14:44:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:25:04.175 14:44:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:25:04.175 14:44:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:25:04.175 14:44:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:25:04.175 14:44:12 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:04.175 14:44:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:04.175 14:44:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:25:04.175 14:44:12 -- bdev/nbd_common.sh@41 -- # break 00:25:04.175 14:44:12 -- bdev/nbd_common.sh@45 -- # return 0 00:25:04.175 14:44:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:04.175 14:44:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:25:04.433 14:44:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:25:04.433 14:44:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:25:04.433 14:44:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:25:04.433 14:44:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:04.433 14:44:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:04.433 14:44:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:25:04.433 14:44:12 -- bdev/nbd_common.sh@41 -- # break 00:25:04.433 14:44:12 -- bdev/nbd_common.sh@45 -- # return 0 00:25:04.433 14:44:12 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:25:04.433 14:44:12 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:04.433 14:44:12 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:04.693 14:44:13 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:25:04.693 14:44:13 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:25:04.693 14:44:13 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:25:04.693 14:44:13 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:25:04.693 14:44:13 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:25:04.693 14:44:13 -- bdev/nbd_common.sh@65 -- # echo '' 00:25:04.693 14:44:13 -- bdev/nbd_common.sh@65 -- # true 00:25:04.693 14:44:13 -- bdev/nbd_common.sh@65 -- # count=0 00:25:04.693 14:44:13 -- bdev/nbd_common.sh@66 -- # echo 0 00:25:04.952 14:44:13 -- bdev/nbd_common.sh@104 -- # count=0 00:25:04.952 14:44:13 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:25:04.952 14:44:13 -- bdev/nbd_common.sh@109 -- # return 0 00:25:04.952 14:44:13 -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:25:04.952 14:44:13 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:04.952 14:44:13 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:25:04.952 14:44:13 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:25:04.952 14:44:13 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:25:04.952 14:44:13 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:25:05.209 malloc_lvol_verify 00:25:05.209 14:44:13 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:25:05.468 fdcc09d5-a209-4bee-8230-0ab77380c123 00:25:05.468 14:44:13 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:25:05.726 a6cf8837-268a-4973-ae16-857932da0ebe 00:25:05.726 14:44:14 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:25:05.985 /dev/nbd0 00:25:05.985 14:44:14 -- 
bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:25:05.985 mke2fs 1.46.5 (30-Dec-2021) 00:25:05.985 Discarding device blocks: 0/4096 done 00:25:05.985 Creating filesystem with 4096 1k blocks and 1024 inodes 00:25:05.985 00:25:05.985 Allocating group tables: 0/1 done 00:25:05.985 Writing inode tables: 0/1 done 00:25:05.985 Creating journal (1024 blocks): done 00:25:05.985 Writing superblocks and filesystem accounting information: 0/1 done 00:25:05.985 00:25:05.985 14:44:14 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:25:05.985 14:44:14 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:25:05.985 14:44:14 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:05.985 14:44:14 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:05.985 14:44:14 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:05.985 14:44:14 -- bdev/nbd_common.sh@51 -- # local i 00:25:05.985 14:44:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:05.985 14:44:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:25:06.243 14:44:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:06.243 14:44:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:06.243 14:44:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:06.243 14:44:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:06.243 14:44:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:06.243 14:44:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:06.243 14:44:14 -- bdev/nbd_common.sh@41 -- # break 00:25:06.243 14:44:14 -- bdev/nbd_common.sh@45 -- # return 0 00:25:06.243 14:44:14 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:25:06.243 14:44:14 -- bdev/nbd_common.sh@147 -- # return 0 00:25:06.243 14:44:14 -- bdev/blockdev.sh@326 -- # killprocess 74570 00:25:06.243 14:44:14 -- common/autotest_common.sh@936 -- # '[' -z 74570 ']' 00:25:06.243 14:44:14 -- common/autotest_common.sh@940 -- # kill -0 74570 00:25:06.243 14:44:14 -- common/autotest_common.sh@941 -- # uname 00:25:06.243 14:44:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:06.243 14:44:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74570 00:25:06.243 14:44:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:06.243 killing process with pid 74570 00:25:06.243 14:44:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:06.243 14:44:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74570' 00:25:06.243 14:44:14 -- common/autotest_common.sh@955 -- # kill 74570 00:25:06.243 14:44:14 -- common/autotest_common.sh@960 -- # wait 74570 00:25:08.173 14:44:16 -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:25:08.173 00:25:08.173 real 0m13.506s 00:25:08.173 user 0m18.073s 00:25:08.173 sys 0m5.111s 00:25:08.173 14:44:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:08.173 14:44:16 -- common/autotest_common.sh@10 -- # set +x 00:25:08.173 ************************************ 00:25:08.173 END TEST bdev_nbd 00:25:08.173 ************************************ 00:25:08.173 14:44:16 -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:25:08.173 14:44:16 -- bdev/blockdev.sh@764 -- # '[' xnvme = nvme ']' 00:25:08.173 14:44:16 -- bdev/blockdev.sh@764 -- # '[' xnvme = gpt ']' 00:25:08.173 14:44:16 -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:25:08.173 14:44:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 
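The bdev_nbd test that just finished drives the same write/verify cycle against every exported /dev/nbd* device. Condensed into a standalone sketch built only from the calls visible above (the rpc.py path and socket path are this environment's; jq and bash 4+ mapfile are assumed available):

#!/usr/bin/env bash
# Sketch of the bdev_nbd write/verify cycle seen in the log above.
set -euo pipefail
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
tmp=$(mktemp)

# Enumerate attached NBD devices the way nbd_get_count does: dump the
# JSON from nbd_get_disks and pull out each .nbd_device field with jq.
mapfile -t nbds < <("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device')
echo "found ${#nbds[@]} nbd device(s)"

# Write one shared 1 MiB random buffer to every device, then compare it
# back byte-for-byte; any mismatch makes cmp (and the script) fail.
dd if=/dev/urandom of="$tmp" bs=4096 count=256
for d in "${nbds[@]}"; do dd if="$tmp" of="$d" bs=4096 count=256 oflag=direct; done
for d in "${nbds[@]}"; do cmp -b -n 1M "$tmp" "$d"; done

# Detach everything again, mirroring nbd_stop_disks.
for d in "${nbds[@]}"; do "$rpc" -s "$sock" nbd_stop_disk "$d"; done
rm -f "$tmp"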
00:25:08.173 14:44:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:08.173 14:44:16 -- common/autotest_common.sh@10 -- # set +x 00:25:08.173 ************************************ 00:25:08.173 START TEST bdev_fio 00:25:08.173 ************************************ 00:25:08.173 14:44:16 -- common/autotest_common.sh@1111 -- # fio_test_suite '' 00:25:08.173 14:44:16 -- bdev/blockdev.sh@331 -- # local env_context 00:25:08.173 14:44:16 -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:25:08.173 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:25:08.173 14:44:16 -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:25:08.173 14:44:16 -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:25:08.173 14:44:16 -- bdev/blockdev.sh@339 -- # echo '' 00:25:08.173 14:44:16 -- bdev/blockdev.sh@339 -- # env_context= 00:25:08.173 14:44:16 -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:25:08.173 14:44:16 -- common/autotest_common.sh@1266 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:25:08.173 14:44:16 -- common/autotest_common.sh@1267 -- # local workload=verify 00:25:08.173 14:44:16 -- common/autotest_common.sh@1268 -- # local bdev_type=AIO 00:25:08.173 14:44:16 -- common/autotest_common.sh@1269 -- # local env_context= 00:25:08.173 14:44:16 -- common/autotest_common.sh@1270 -- # local fio_dir=/usr/src/fio 00:25:08.173 14:44:16 -- common/autotest_common.sh@1272 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:25:08.173 14:44:16 -- common/autotest_common.sh@1277 -- # '[' -z verify ']' 00:25:08.173 14:44:16 -- common/autotest_common.sh@1281 -- # '[' -n '' ']' 00:25:08.173 14:44:16 -- common/autotest_common.sh@1285 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:25:08.173 14:44:16 -- common/autotest_common.sh@1287 -- # cat 00:25:08.173 14:44:16 -- common/autotest_common.sh@1299 -- # '[' verify == verify ']' 00:25:08.173 14:44:16 -- common/autotest_common.sh@1300 -- # cat 00:25:08.173 14:44:16 -- common/autotest_common.sh@1309 -- # '[' AIO == AIO ']' 00:25:08.173 14:44:16 -- common/autotest_common.sh@1310 -- # /usr/src/fio/fio --version 00:25:08.173 14:44:16 -- common/autotest_common.sh@1310 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:25:08.174 14:44:16 -- common/autotest_common.sh@1311 -- # echo serialize_overlap=1 00:25:08.174 14:44:16 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:25:08.174 14:44:16 -- bdev/blockdev.sh@342 -- # echo '[job_nvme0n1]' 00:25:08.174 14:44:16 -- bdev/blockdev.sh@343 -- # echo filename=nvme0n1 00:25:08.174 14:44:16 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:25:08.174 14:44:16 -- bdev/blockdev.sh@342 -- # echo '[job_nvme1n1]' 00:25:08.174 14:44:16 -- bdev/blockdev.sh@343 -- # echo filename=nvme1n1 00:25:08.174 14:44:16 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:25:08.174 14:44:16 -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n1]' 00:25:08.174 14:44:16 -- bdev/blockdev.sh@343 -- # echo filename=nvme2n1 00:25:08.174 14:44:16 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:25:08.174 14:44:16 -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n2]' 00:25:08.174 14:44:16 -- bdev/blockdev.sh@343 -- # echo filename=nvme2n2 00:25:08.174 14:44:16 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:25:08.174 14:44:16 -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n3]' 00:25:08.174 14:44:16 -- bdev/blockdev.sh@343 -- # echo 
filename=nvme2n3 00:25:08.174 14:44:16 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:25:08.174 14:44:16 -- bdev/blockdev.sh@342 -- # echo '[job_nvme3n1]' 00:25:08.174 14:44:16 -- bdev/blockdev.sh@343 -- # echo filename=nvme3n1 00:25:08.174 14:44:16 -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:25:08.174 14:44:16 -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:25:08.174 14:44:16 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:25:08.174 14:44:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:08.174 14:44:16 -- common/autotest_common.sh@10 -- # set +x 00:25:08.174 ************************************ 00:25:08.174 START TEST bdev_fio_rw_verify 00:25:08.174 ************************************ 00:25:08.174 14:44:16 -- common/autotest_common.sh@1111 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:25:08.174 14:44:16 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:25:08.174 14:44:16 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:25:08.174 14:44:16 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:08.174 14:44:16 -- common/autotest_common.sh@1325 -- # local sanitizers 00:25:08.174 14:44:16 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:08.174 14:44:16 -- common/autotest_common.sh@1327 -- # shift 00:25:08.174 14:44:16 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:25:08.174 14:44:16 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:25:08.174 14:44:16 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:08.174 14:44:16 -- common/autotest_common.sh@1331 -- # grep libasan 00:25:08.174 14:44:16 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:25:08.174 14:44:16 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:25:08.174 14:44:16 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:25:08.174 14:44:16 -- common/autotest_common.sh@1333 -- # break 00:25:08.174 14:44:16 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:08.174 14:44:16 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:25:08.174 
job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:25:08.174 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:25:08.174 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:25:08.174 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:25:08.174 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:25:08.174 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:25:08.174 fio-3.35 00:25:08.174 Starting 6 threads 00:25:20.392 00:25:20.392 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=75019: Wed Apr 17 14:44:27 2024 00:25:20.392 read: IOPS=27.1k, BW=106MiB/s (111MB/s)(1058MiB/10001msec) 00:25:20.392 slat (usec): min=2, max=827, avg= 6.58, stdev= 5.24 00:25:20.392 clat (usec): min=125, max=13605, avg=695.15, stdev=296.69 00:25:20.392 lat (usec): min=129, max=13611, avg=701.73, stdev=297.37 00:25:20.392 clat percentiles (usec): 00:25:20.392 | 50.000th=[ 693], 99.000th=[ 1467], 99.900th=[ 2704], 99.990th=[ 4555], 00:25:20.392 | 99.999th=[13566] 00:25:20.392 write: IOPS=27.5k, BW=107MiB/s (113MB/s)(1074MiB/10001msec); 0 zone resets 00:25:20.392 slat (usec): min=12, max=8755, avg=27.11, stdev=44.78 00:25:20.392 clat (usec): min=70, max=37253, avg=778.69, stdev=433.95 00:25:20.392 lat (usec): min=85, max=37278, avg=805.80, stdev=437.65 00:25:20.392 clat percentiles (usec): 00:25:20.392 | 50.000th=[ 758], 99.000th=[ 1663], 99.900th=[ 3556], 99.990th=[ 9634], 00:25:20.392 | 99.999th=[36963] 00:25:20.392 bw ( KiB/s): min=93688, max=144352, per=98.59%, avg=108418.11, stdev=2359.87, samples=114 00:25:20.392 iops : min=23422, max=36088, avg=27104.53, stdev=589.97, samples=114 00:25:20.392 lat (usec) : 100=0.01%, 250=2.94%, 500=18.05%, 750=32.47%, 1000=31.68% 00:25:20.392 lat (msec) : 2=14.53%, 4=0.29%, 10=0.04%, 20=0.01%, 50=0.01% 00:25:20.392 cpu : usr=55.12%, sys=30.36%, ctx=9014, majf=0, minf=23445 00:25:20.392 IO depths : 1=12.1%, 2=24.7%, 4=50.3%, 8=12.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:20.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.392 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.392 issued rwts: total=270732,274947,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.392 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:20.392 00:25:20.392 Run status group 0 (all jobs): 00:25:20.392 READ: bw=106MiB/s (111MB/s), 106MiB/s-106MiB/s (111MB/s-111MB/s), io=1058MiB (1109MB), run=10001-10001msec 00:25:20.392 WRITE: bw=107MiB/s (113MB/s), 107MiB/s-107MiB/s (113MB/s-113MB/s), io=1074MiB (1126MB), run=10001-10001msec 00:25:21.027 ----------------------------------------------------- 00:25:21.027 Suppressions used: 00:25:21.027 count bytes template 00:25:21.027 6 48 /usr/src/fio/parse.c 00:25:21.027 4021 386016 /usr/src/fio/iolog.c 00:25:21.027 1 8 libtcmalloc_minimal.so 00:25:21.027 1 904 libcrypto.so 00:25:21.027 ----------------------------------------------------- 00:25:21.027 00:25:21.027 00:25:21.027 real 0m12.829s 00:25:21.027 user 0m35.465s 00:25:21.027 sys 0m18.670s 00:25:21.027 ************************************ 00:25:21.027 END TEST bdev_fio_rw_verify 00:25:21.027 ************************************ 
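The bdev.fio file consumed by that run is generated on the fly by fio_config_gen and is never printed here; only its visible ingredients appear in the log (one [job_*] section per bdev, serialize_overlap=1, rw=randwrite in the per-job banners, and the engine/depth/bs/runtime passed on the command line). A plausible reconstruction under those assumptions:

# Approximate job file -- the generated original is not shown in the log;
# the verify mode in particular is assumed, not observed.
cat > /tmp/bdev.fio <<'EOF'
[global]
rw=randwrite
; assumed verify mode -- fio_config_gen's actual choice is not visible here
verify=crc32c
serialize_overlap=1

[job_nvme0n1]
filename=nvme0n1

[job_nvme1n1]
filename=nvme1n1

[job_nvme2n1]
filename=nvme2n1

[job_nvme2n2]
filename=nvme2n2

[job_nvme2n3]
filename=nvme2n3

[job_nvme3n1]
filename=nvme3n1
EOF

# Invocation as captured above: the fio bdev plugin is LD_PRELOADed
# (together with libasan, since this is an ASAN build) and the SPDK bdev
# layout comes from the JSON config rather than from real block nodes.
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
  /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
  /tmp/bdev.fio --verify_state_save=0 \
  --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
  --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output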
00:25:21.027 14:44:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:21.027 14:44:29 -- common/autotest_common.sh@10 -- # set +x 00:25:21.027 14:44:29 -- bdev/blockdev.sh@350 -- # rm -f 00:25:21.027 14:44:29 -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:25:21.027 14:44:29 -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:25:21.027 14:44:29 -- common/autotest_common.sh@1266 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:25:21.027 14:44:29 -- common/autotest_common.sh@1267 -- # local workload=trim 00:25:21.027 14:44:29 -- common/autotest_common.sh@1268 -- # local bdev_type= 00:25:21.027 14:44:29 -- common/autotest_common.sh@1269 -- # local env_context= 00:25:21.027 14:44:29 -- common/autotest_common.sh@1270 -- # local fio_dir=/usr/src/fio 00:25:21.027 14:44:29 -- common/autotest_common.sh@1272 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:25:21.027 14:44:29 -- common/autotest_common.sh@1277 -- # '[' -z trim ']' 00:25:21.027 14:44:29 -- common/autotest_common.sh@1281 -- # '[' -n '' ']' 00:25:21.027 14:44:29 -- common/autotest_common.sh@1285 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:25:21.027 14:44:29 -- common/autotest_common.sh@1287 -- # cat 00:25:21.027 14:44:29 -- common/autotest_common.sh@1299 -- # '[' trim == verify ']' 00:25:21.027 14:44:29 -- common/autotest_common.sh@1314 -- # '[' trim == trim ']' 00:25:21.027 14:44:29 -- common/autotest_common.sh@1315 -- # echo rw=trimwrite 00:25:21.027 14:44:29 -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:25:21.027 14:44:29 -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "940e8dd1-394a-49bc-a006-7fc2c64d7250"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "940e8dd1-394a-49bc-a006-7fc2c64d7250",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "b376d1b7-9d11-4602-ab4d-e1805ae2ff58"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "b376d1b7-9d11-4602-ab4d-e1805ae2ff58",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "d371a320-6a92-4beb-a597-78c9db6cd0eb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d371a320-6a92-4beb-a597-78c9db6cd0eb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "f9fe658c-46e2-4c62-9723-0d48b52ec728"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f9fe658c-46e2-4c62-9723-0d48b52ec728",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "005a40ee-12b5-4044-9967-17ee86edd5ae"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "005a40ee-12b5-4044-9967-17ee86edd5ae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "9ec5ef6e-3326-488d-b20d-687f5cc964b3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "9ec5ef6e-3326-488d-b20d-687f5cc964b3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:25:21.027 14:44:29 -- bdev/blockdev.sh@355 -- # [[ -n '' ]] 00:25:21.027 14:44:29 -- bdev/blockdev.sh@361 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:25:21.027 14:44:29 -- bdev/blockdev.sh@362 -- # popd 00:25:21.027 /home/vagrant/spdk_repo/spdk 00:25:21.027 14:44:29 -- bdev/blockdev.sh@363 -- # trap - SIGINT SIGTERM EXIT 00:25:21.027 14:44:29 -- bdev/blockdev.sh@364 -- # return 0 00:25:21.027 00:25:21.027 real 0m13.067s 00:25:21.027 user 0m35.568s 00:25:21.027 sys 0m18.795s 00:25:21.027 14:44:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:21.027 14:44:29 -- common/autotest_common.sh@10 -- # set +x 00:25:21.027 ************************************ 00:25:21.027 END TEST bdev_fio 00:25:21.027 ************************************ 00:25:21.027 14:44:29 -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:21.027 14:44:29 -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:25:21.027 14:44:29 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:25:21.027 14:44:29 
-- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:21.027 14:44:29 -- common/autotest_common.sh@10 -- # set +x 00:25:21.027 ************************************ 00:25:21.027 START TEST bdev_verify 00:25:21.027 ************************************ 00:25:21.027 14:44:29 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:25:21.286 [2024-04-17 14:44:29.649733] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:25:21.286 [2024-04-17 14:44:29.649903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75201 ] 00:25:21.286 [2024-04-17 14:44:29.819606] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:21.547 [2024-04-17 14:44:30.077885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.547 [2024-04-17 14:44:30.077907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:21.547 [2024-04-17 14:44:30.128047] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:25:22.115 [2024-04-17 14:44:30.612745] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:25:22.115 Running I/O for 5 seconds... 00:25:27.490 00:25:27.490 Latency(us) 00:25:27.490 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:27.490 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:27.490 Verification LBA range: start 0x0 length 0xa0000 00:25:27.490 nvme0n1 : 5.03 1680.79 6.57 0.00 0.00 76032.12 13606.52 81888.79 00:25:27.490 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:27.490 Verification LBA range: start 0xa0000 length 0xa0000 00:25:27.490 nvme0n1 : 5.03 1706.50 6.67 0.00 0.00 74877.91 12545.46 70404.39 00:25:27.490 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:27.490 Verification LBA range: start 0x0 length 0xbd0bd 00:25:27.490 nvme1n1 : 5.05 2979.53 11.64 0.00 0.00 42762.14 4868.39 76895.57 00:25:27.490 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:27.490 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:25:27.490 nvme1n1 : 5.06 2949.95 11.52 0.00 0.00 43170.26 4649.94 68407.10 00:25:27.490 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:27.491 Verification LBA range: start 0x0 length 0x80000 00:25:27.491 nvme2n1 : 5.04 1725.38 6.74 0.00 0.00 73793.60 8176.40 79392.18 00:25:27.491 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:27.491 Verification LBA range: start 0x80000 length 0x80000 00:25:27.491 nvme2n1 : 5.06 1719.94 6.72 0.00 0.00 73970.76 7396.21 83386.76 00:25:27.491 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:27.491 Verification LBA range: start 0x0 length 0x80000 00:25:27.491 nvme2n2 : 5.05 1699.52 6.64 0.00 0.00 74814.21 8800.55 74898.29 00:25:27.491 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:27.491 Verification LBA range: start 0x80000 length 0x80000 00:25:27.491 nvme2n2 : 5.06 1721.16 6.72 0.00 0.00 73786.56 7333.79 83386.76 00:25:27.491 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:27.491 
Verification LBA range: start 0x0 length 0x80000 00:25:27.491 nvme2n3 : 5.05 1698.42 6.63 0.00 0.00 74752.87 7333.79 68906.42 00:25:27.491 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:27.491 Verification LBA range: start 0x80000 length 0x80000 00:25:27.491 nvme2n3 : 5.06 1719.40 6.72 0.00 0.00 73734.12 7708.28 71902.35 00:25:27.491 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:27.491 Verification LBA range: start 0x0 length 0x20000 00:25:27.491 nvme3n1 : 5.05 1697.91 6.63 0.00 0.00 74663.29 8051.57 71403.03 00:25:27.491 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:27.491 Verification LBA range: start 0x20000 length 0x20000 00:25:27.491 nvme3n1 : 5.06 1720.51 6.72 0.00 0.00 73596.19 7864.32 64911.85 00:25:27.491 =================================================================================================================== 00:25:27.491 Total : 23019.00 89.92 0.00 0.00 66295.36 4649.94 83386.76 00:25:28.890 00:25:28.890 real 0m7.596s 00:25:28.890 user 0m11.595s 00:25:28.890 sys 0m2.028s 00:25:28.890 14:44:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:28.890 14:44:37 -- common/autotest_common.sh@10 -- # set +x 00:25:28.890 ************************************ 00:25:28.890 END TEST bdev_verify 00:25:28.890 ************************************ 00:25:28.890 14:44:37 -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:25:28.890 14:44:37 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:25:28.890 14:44:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:28.890 14:44:37 -- common/autotest_common.sh@10 -- # set +x 00:25:28.890 ************************************ 00:25:28.890 START TEST bdev_verify_big_io 00:25:28.890 ************************************ 00:25:28.890 14:44:37 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:25:28.890 [2024-04-17 14:44:37.431187] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:25:28.890 [2024-04-17 14:44:37.431538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75305 ] 00:25:29.149 [2024-04-17 14:44:37.595022] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:29.482 [2024-04-17 14:44:37.867704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.482 [2024-04-17 14:44:37.867709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:29.482 [2024-04-17 14:44:37.917788] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:25:30.053 [2024-04-17 14:44:38.414631] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:25:30.053 Running I/O for 5 seconds... 
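Both bdev_verify above and the big-I/O run now in flight use the same bdevperf entry point; only the flag values differ. Spelled out as a sketch (flag glosses are inferred from the output; -C is simply forwarded by the harness and left uninterpreted here):

# bdevperf verify run as invoked by the harness. Flag sketch:
#   --json  bdev layout to load          -q  per-job queue depth ("depth: 128")
#   -o      I/O size in bytes            -w  workload (verify: write, read back, compare)
#   -t      run time in seconds          -m  reactor core mask (0x3 -> the 0x1/0x2 job pairs)
#   -C      passed through unchanged by the test scripts
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
  -q 128 -o 65536 -w verify -t 5 -C -m 0x3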
00:25:36.616 00:25:36.616 Latency(us) 00:25:36.616 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:36.616 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:36.616 Verification LBA range: start 0x0 length 0xa000 00:25:36.616 nvme0n1 : 5.93 107.89 6.74 0.00 0.00 1159988.76 99365.06 1390112.18 00:25:36.616 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:36.616 Verification LBA range: start 0xa000 length 0xa000 00:25:36.616 nvme0n1 : 5.97 83.13 5.20 0.00 0.00 1483986.71 185747.75 2492614.95 00:25:36.616 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:36.616 Verification LBA range: start 0x0 length 0xbd0b 00:25:36.616 nvme1n1 : 5.95 150.51 9.41 0.00 0.00 808404.39 81888.79 774947.60 00:25:36.616 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:36.616 Verification LBA range: start 0xbd0b length 0xbd0b 00:25:36.616 nvme1n1 : 5.97 171.55 10.72 0.00 0.00 698233.17 68906.42 738996.42 00:25:36.616 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:36.616 Verification LBA range: start 0x0 length 0x8000 00:25:36.616 nvme2n1 : 5.96 107.42 6.71 0.00 0.00 1104518.00 98366.42 1046578.71 00:25:36.616 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:36.617 Verification LBA range: start 0x8000 length 0x8000 00:25:36.617 nvme2n1 : 5.93 163.31 10.21 0.00 0.00 714274.05 108352.85 754974.72 00:25:36.617 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:36.617 Verification LBA range: start 0x0 length 0x8000 00:25:36.617 nvme2n2 : 5.96 149.12 9.32 0.00 0.00 774735.54 11734.06 754974.72 00:25:36.617 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:36.617 Verification LBA range: start 0x8000 length 0x8000 00:25:36.617 nvme2n2 : 5.98 117.73 7.36 0.00 0.00 970711.93 46936.26 1765602.26 00:25:36.617 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:36.617 Verification LBA range: start 0x0 length 0x8000 00:25:36.617 nvme2n3 : 5.95 115.71 7.23 0.00 0.00 967179.27 91875.23 1557884.34 00:25:36.617 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:36.617 Verification LBA range: start 0x8000 length 0x8000 00:25:36.617 nvme2n3 : 5.97 109.85 6.87 0.00 0.00 1010816.02 31332.45 2220983.83 00:25:36.617 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:36.617 Verification LBA range: start 0x0 length 0x2000 00:25:36.617 nvme3n1 : 5.96 169.12 10.57 0.00 0.00 646872.24 8488.47 966687.21 00:25:36.617 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:36.617 Verification LBA range: start 0x2000 length 0x2000 00:25:36.617 nvme3n1 : 5.98 147.07 9.19 0.00 0.00 735026.77 6085.49 1454025.39 00:25:36.617 =================================================================================================================== 00:25:36.617 Total : 1592.41 99.53 0.00 0.00 876350.33 6085.49 2492614.95 00:25:37.990 00:25:37.990 real 0m9.165s 00:25:37.990 user 0m16.246s 00:25:37.990 sys 0m0.639s 00:25:37.990 14:44:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:37.990 14:44:46 -- common/autotest_common.sh@10 -- # set +x 00:25:37.990 ************************************ 00:25:37.990 END TEST bdev_verify_big_io 00:25:37.990 ************************************ 00:25:37.990 14:44:46 -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:37.990 14:44:46 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:25:37.990 14:44:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:37.990 14:44:46 -- common/autotest_common.sh@10 -- # set +x 00:25:37.990 ************************************ 00:25:37.990 START TEST bdev_write_zeroes 00:25:37.990 ************************************ 00:25:37.990 14:44:46 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:38.247 [2024-04-17 14:44:46.651358] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:25:38.247 [2024-04-17 14:44:46.651753] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75431 ] 00:25:38.247 [2024-04-17 14:44:46.820631] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.507 [2024-04-17 14:44:47.100995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.764 [2024-04-17 14:44:47.150948] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:25:39.330 [2024-04-17 14:44:47.641572] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:25:39.330 Running I/O for 1 seconds... 00:25:40.266 00:25:40.266 Latency(us) 00:25:40.266 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:40.266 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:25:40.266 nvme0n1 : 1.02 11182.33 43.68 0.00 0.00 11435.07 7177.75 22469.49 00:25:40.266 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:25:40.266 nvme1n1 : 1.02 17256.80 67.41 0.00 0.00 7403.23 3807.33 17101.78 00:25:40.266 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:25:40.266 nvme2n1 : 1.02 11168.26 43.63 0.00 0.00 11389.30 7146.54 21720.50 00:25:40.266 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:25:40.266 nvme2n2 : 1.02 11155.31 43.58 0.00 0.00 11394.91 7146.54 20472.20 00:25:40.266 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:25:40.266 nvme2n3 : 1.02 11143.17 43.53 0.00 0.00 11398.21 7177.75 19723.22 00:25:40.266 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:25:40.266 nvme3n1 : 1.02 11131.45 43.48 0.00 0.00 11402.40 7177.75 21470.84 00:25:40.266 =================================================================================================================== 00:25:40.266 Total : 73037.31 285.30 0.00 0.00 10462.22 3807.33 22469.49 00:25:41.650 00:25:41.650 real 0m3.614s 00:25:41.650 user 0m2.765s 00:25:41.650 sys 0m0.683s 00:25:41.650 14:44:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:41.650 ************************************ 00:25:41.650 END TEST bdev_write_zeroes 00:25:41.650 ************************************ 00:25:41.650 14:44:50 -- common/autotest_common.sh@10 -- # set +x 00:25:41.650 14:44:50 -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json 
/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:41.650 14:44:50 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:25:41.650 14:44:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:41.650 14:44:50 -- common/autotest_common.sh@10 -- # set +x 00:25:41.909 ************************************ 00:25:41.909 START TEST bdev_json_nonenclosed 00:25:41.909 ************************************ 00:25:41.909 14:44:50 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:41.909 [2024-04-17 14:44:50.447185] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:25:41.909 [2024-04-17 14:44:50.447723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75504 ] 00:25:42.185 [2024-04-17 14:44:50.631675] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.444 [2024-04-17 14:44:50.905192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.444 [2024-04-17 14:44:50.905292] json_config.c: 582:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:25:42.444 [2024-04-17 14:44:50.905322] rpc.c: 193:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:25:42.444 [2024-04-17 14:44:50.905336] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:43.013 00:25:43.013 real 0m1.159s 00:25:43.013 user 0m0.849s 00:25:43.013 sys 0m0.198s 00:25:43.013 14:44:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:43.013 14:44:51 -- common/autotest_common.sh@10 -- # set +x 00:25:43.013 ************************************ 00:25:43.013 END TEST bdev_json_nonenclosed 00:25:43.013 ************************************ 00:25:43.013 14:44:51 -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:43.013 14:44:51 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:25:43.013 14:44:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:43.013 14:44:51 -- common/autotest_common.sh@10 -- # set +x 00:25:43.013 ************************************ 00:25:43.013 START TEST bdev_json_nonarray 00:25:43.013 ************************************ 00:25:43.013 14:44:51 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:43.271 [2024-04-17 14:44:51.714462] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
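bdev_json_nonenclosed above and bdev_json_nonarray now starting both feed bdevperf a deliberately malformed config and assert that the app refuses it; the fixture contents never appear in the log, only the resulting parse errors do. Shapes consistent with those errors (assumed, not the actual files):

# nonenclosed.json -- content that is not wrapped in a top-level {} object,
# triggering "Invalid JSON configuration: not enclosed in {}."
cat > /tmp/nonenclosed.json <<'EOF'
"subsystems": []
EOF

# nonarray.json -- enclosed, but "subsystems" is not an array, triggering
# "Invalid JSON configuration: 'subsystems' should be an array."
cat > /tmp/nonarray.json <<'EOF'
{ "subsystems": { "subsystem": "bdev" } }
EOF

# Either file makes the json_config loader error out and bdevperf exit
# non-zero, which is exactly what these two tests check for.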
00:25:43.271 [2024-04-17 14:44:51.714868] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75540 ] 00:25:43.530 [2024-04-17 14:44:51.901950] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.789 [2024-04-17 14:44:52.220827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.789 [2024-04-17 14:44:52.220966] json_config.c: 588:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:25:43.789 [2024-04-17 14:44:52.221004] rpc.c: 193:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:25:43.789 [2024-04-17 14:44:52.221019] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:44.355 00:25:44.355 real 0m1.160s 00:25:44.355 user 0m0.857s 00:25:44.355 sys 0m0.193s 00:25:44.355 14:44:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:44.355 14:44:52 -- common/autotest_common.sh@10 -- # set +x 00:25:44.355 ************************************ 00:25:44.355 END TEST bdev_json_nonarray 00:25:44.355 ************************************ 00:25:44.355 14:44:52 -- bdev/blockdev.sh@787 -- # [[ xnvme == bdev ]] 00:25:44.355 14:44:52 -- bdev/blockdev.sh@794 -- # [[ xnvme == gpt ]] 00:25:44.355 14:44:52 -- bdev/blockdev.sh@798 -- # [[ xnvme == crypto_sw ]] 00:25:44.355 14:44:52 -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:25:44.355 14:44:52 -- bdev/blockdev.sh@811 -- # cleanup 00:25:44.355 14:44:52 -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:25:44.355 14:44:52 -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:25:44.355 14:44:52 -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:25:44.355 14:44:52 -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:25:44.355 14:44:52 -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:25:44.355 14:44:52 -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:25:44.355 14:44:52 -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:44.924 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:45.492 lsblk: /dev/nvme3c3n1: not a block device 00:25:48.023 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:48.023 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:48.281 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:25:48.281 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:25:48.539 00:25:48.539 real 1m12.004s 00:25:48.539 user 1m48.704s 00:25:48.539 sys 0m39.168s 00:25:48.539 14:44:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:48.539 ************************************ 00:25:48.539 END TEST blockdev_xnvme 00:25:48.539 14:44:56 -- common/autotest_common.sh@10 -- # set +x 00:25:48.539 ************************************ 00:25:48.539 14:44:56 -- spdk/autotest.sh@248 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:25:48.539 14:44:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:48.539 14:44:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:48.539 14:44:56 -- common/autotest_common.sh@10 -- # set +x 00:25:48.539 ************************************ 00:25:48.539 START TEST ublk 00:25:48.539 ************************************ 00:25:48.539 14:44:57 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:25:48.539 * Looking for test storage... 00:25:48.539 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:25:48.539 14:44:57 -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:25:48.539 14:44:57 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:25:48.539 14:44:57 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:25:48.539 14:44:57 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:25:48.539 14:44:57 -- lvol/common.sh@9 -- # AIO_BS=4096 00:25:48.539 14:44:57 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:25:48.539 14:44:57 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:25:48.539 14:44:57 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:25:48.539 14:44:57 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:25:48.539 14:44:57 -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:25:48.539 14:44:57 -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:25:48.539 14:44:57 -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:25:48.539 14:44:57 -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:25:48.539 14:44:57 -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:25:48.539 14:44:57 -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:25:48.539 14:44:57 -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:25:48.539 14:44:57 -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:25:48.539 14:44:57 -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:25:48.539 14:44:57 -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:25:48.798 14:44:57 -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:25:48.798 14:44:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:48.798 14:44:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:48.798 14:44:57 -- common/autotest_common.sh@10 -- # set +x 00:25:48.798 ************************************ 00:25:48.798 START TEST test_save_ublk_config 00:25:48.798 ************************************ 00:25:48.798 14:44:57 -- common/autotest_common.sh@1111 -- # test_save_config 00:25:48.798 14:44:57 -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:25:48.798 14:44:57 -- ublk/ublk.sh@103 -- # tgtpid=75875 00:25:48.798 14:44:57 -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:25:48.798 14:44:57 -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:25:48.798 14:44:57 -- ublk/ublk.sh@106 -- # waitforlisten 75875 00:25:48.798 14:44:57 -- common/autotest_common.sh@817 -- # '[' -z 75875 ']' 00:25:48.798 14:44:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:48.798 14:44:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:48.798 14:44:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:48.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:48.798 14:44:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:48.798 14:44:57 -- common/autotest_common.sh@10 -- # set +x 00:25:48.798 [2024-04-17 14:44:57.335098] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:25:48.798 [2024-04-17 14:44:57.335800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75875 ] 00:25:49.128 [2024-04-17 14:44:57.507794] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.388 [2024-04-17 14:44:57.782247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.768 14:44:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:50.768 14:44:58 -- common/autotest_common.sh@850 -- # return 0 00:25:50.768 14:44:58 -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:25:50.768 14:44:58 -- ublk/ublk.sh@108 -- # rpc_cmd 00:25:50.768 14:44:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:50.768 14:44:58 -- common/autotest_common.sh@10 -- # set +x 00:25:50.768 [2024-04-17 14:44:58.959017] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:25:50.768 malloc0 00:25:50.768 [2024-04-17 14:44:59.068330] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:25:50.768 [2024-04-17 14:44:59.068450] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:25:50.768 [2024-04-17 14:44:59.068462] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:25:50.768 [2024-04-17 14:44:59.068475] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:25:50.768 [2024-04-17 14:44:59.091544] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:25:50.768 [2024-04-17 14:44:59.091610] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:25:50.768 [2024-04-17 14:44:59.117524] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:25:50.768 [2024-04-17 14:44:59.117717] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:25:50.768 [2024-04-17 14:44:59.167157] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:25:50.768 0 00:25:50.768 14:44:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:50.768 14:44:59 -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:25:50.768 14:44:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:50.768 14:44:59 -- common/autotest_common.sh@10 -- # set +x 00:25:51.028 14:44:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:51.028 14:44:59 -- ublk/ublk.sh@115 -- # config='{ 00:25:51.028 "subsystems": [ 00:25:51.028 { 00:25:51.028 "subsystem": "keyring", 00:25:51.028 "config": [] 00:25:51.028 }, 00:25:51.028 { 00:25:51.028 "subsystem": "iobuf", 00:25:51.028 "config": [ 00:25:51.028 { 00:25:51.028 "method": "iobuf_set_options", 00:25:51.028 "params": { 00:25:51.028 "small_pool_count": 8192, 00:25:51.028 "large_pool_count": 1024, 00:25:51.028 "small_bufsize": 8192, 00:25:51.028 "large_bufsize": 135168 00:25:51.028 } 00:25:51.028 } 00:25:51.028 ] 00:25:51.028 }, 00:25:51.028 { 00:25:51.028 "subsystem": "sock", 00:25:51.028 "config": [ 00:25:51.028 { 00:25:51.028 "method": "sock_impl_set_options", 00:25:51.028 "params": { 00:25:51.028 "impl_name": "posix", 00:25:51.028 "recv_buf_size": 2097152, 00:25:51.028 "send_buf_size": 2097152, 00:25:51.028 "enable_recv_pipe": true, 00:25:51.028 "enable_quickack": false, 00:25:51.028 "enable_placement_id": 0, 00:25:51.028 "enable_zerocopy_send_server": true, 00:25:51.028 
"enable_zerocopy_send_client": false, 00:25:51.028 "zerocopy_threshold": 0, 00:25:51.028 "tls_version": 0, 00:25:51.028 "enable_ktls": false 00:25:51.028 } 00:25:51.028 }, 00:25:51.028 { 00:25:51.028 "method": "sock_impl_set_options", 00:25:51.028 "params": { 00:25:51.028 "impl_name": "ssl", 00:25:51.028 "recv_buf_size": 4096, 00:25:51.028 "send_buf_size": 4096, 00:25:51.028 "enable_recv_pipe": true, 00:25:51.028 "enable_quickack": false, 00:25:51.028 "enable_placement_id": 0, 00:25:51.028 "enable_zerocopy_send_server": true, 00:25:51.028 "enable_zerocopy_send_client": false, 00:25:51.028 "zerocopy_threshold": 0, 00:25:51.028 "tls_version": 0, 00:25:51.028 "enable_ktls": false 00:25:51.028 } 00:25:51.028 } 00:25:51.028 ] 00:25:51.028 }, 00:25:51.028 { 00:25:51.028 "subsystem": "vmd", 00:25:51.028 "config": [] 00:25:51.028 }, 00:25:51.028 { 00:25:51.028 "subsystem": "accel", 00:25:51.028 "config": [ 00:25:51.028 { 00:25:51.028 "method": "accel_set_options", 00:25:51.028 "params": { 00:25:51.028 "small_cache_size": 128, 00:25:51.028 "large_cache_size": 16, 00:25:51.028 "task_count": 2048, 00:25:51.028 "sequence_count": 2048, 00:25:51.028 "buf_count": 2048 00:25:51.028 } 00:25:51.028 } 00:25:51.028 ] 00:25:51.028 }, 00:25:51.028 { 00:25:51.028 "subsystem": "bdev", 00:25:51.028 "config": [ 00:25:51.028 { 00:25:51.028 "method": "bdev_set_options", 00:25:51.028 "params": { 00:25:51.028 "bdev_io_pool_size": 65535, 00:25:51.028 "bdev_io_cache_size": 256, 00:25:51.028 "bdev_auto_examine": true, 00:25:51.028 "iobuf_small_cache_size": 128, 00:25:51.028 "iobuf_large_cache_size": 16 00:25:51.028 } 00:25:51.028 }, 00:25:51.028 { 00:25:51.028 "method": "bdev_raid_set_options", 00:25:51.028 "params": { 00:25:51.028 "process_window_size_kb": 1024 00:25:51.028 } 00:25:51.028 }, 00:25:51.028 { 00:25:51.028 "method": "bdev_iscsi_set_options", 00:25:51.028 "params": { 00:25:51.028 "timeout_sec": 30 00:25:51.028 } 00:25:51.028 }, 00:25:51.028 { 00:25:51.028 "method": "bdev_nvme_set_options", 00:25:51.028 "params": { 00:25:51.028 "action_on_timeout": "none", 00:25:51.028 "timeout_us": 0, 00:25:51.028 "timeout_admin_us": 0, 00:25:51.028 "keep_alive_timeout_ms": 10000, 00:25:51.028 "arbitration_burst": 0, 00:25:51.028 "low_priority_weight": 0, 00:25:51.028 "medium_priority_weight": 0, 00:25:51.028 "high_priority_weight": 0, 00:25:51.028 "nvme_adminq_poll_period_us": 10000, 00:25:51.028 "nvme_ioq_poll_period_us": 0, 00:25:51.028 "io_queue_requests": 0, 00:25:51.028 "delay_cmd_submit": true, 00:25:51.028 "transport_retry_count": 4, 00:25:51.028 "bdev_retry_count": 3, 00:25:51.028 "transport_ack_timeout": 0, 00:25:51.028 "ctrlr_loss_timeout_sec": 0, 00:25:51.028 "reconnect_delay_sec": 0, 00:25:51.028 "fast_io_fail_timeout_sec": 0, 00:25:51.028 "disable_auto_failback": false, 00:25:51.028 "generate_uuids": false, 00:25:51.028 "transport_tos": 0, 00:25:51.028 "nvme_error_stat": false, 00:25:51.028 "rdma_srq_size": 0, 00:25:51.028 "io_path_stat": false, 00:25:51.028 "allow_accel_sequence": false, 00:25:51.028 "rdma_max_cq_size": 0, 00:25:51.028 "rdma_cm_event_timeout_ms": 0, 00:25:51.028 "dhchap_digests": [ 00:25:51.028 "sha256", 00:25:51.028 "sha384", 00:25:51.028 "sha512" 00:25:51.028 ], 00:25:51.028 "dhchap_dhgroups": [ 00:25:51.028 "null", 00:25:51.028 "ffdhe2048", 00:25:51.028 "ffdhe3072", 00:25:51.028 "ffdhe4096", 00:25:51.028 "ffdhe6144", 00:25:51.028 "ffdhe8192" 00:25:51.028 ] 00:25:51.028 } 00:25:51.028 }, 00:25:51.028 { 00:25:51.028 "method": "bdev_nvme_set_hotplug", 00:25:51.028 "params": { 00:25:51.028 
"period_us": 100000, 00:25:51.028 "enable": false 00:25:51.028 } 00:25:51.028 }, 00:25:51.028 { 00:25:51.028 "method": "bdev_malloc_create", 00:25:51.028 "params": { 00:25:51.028 "name": "malloc0", 00:25:51.028 "num_blocks": 8192, 00:25:51.028 "block_size": 4096, 00:25:51.028 "physical_block_size": 4096, 00:25:51.028 "uuid": "aac4e4b8-bda5-44b6-b49f-67ec108b3e91", 00:25:51.028 "optimal_io_boundary": 0 00:25:51.028 } 00:25:51.028 }, 00:25:51.028 { 00:25:51.028 "method": "bdev_wait_for_examine" 00:25:51.028 } 00:25:51.028 ] 00:25:51.028 }, 00:25:51.028 { 00:25:51.028 "subsystem": "scsi", 00:25:51.028 "config": null 00:25:51.028 }, 00:25:51.028 { 00:25:51.028 "subsystem": "scheduler", 00:25:51.028 "config": [ 00:25:51.029 { 00:25:51.029 "method": "framework_set_scheduler", 00:25:51.029 "params": { 00:25:51.029 "name": "static" 00:25:51.029 } 00:25:51.029 } 00:25:51.029 ] 00:25:51.029 }, 00:25:51.029 { 00:25:51.029 "subsystem": "vhost_scsi", 00:25:51.029 "config": [] 00:25:51.029 }, 00:25:51.029 { 00:25:51.029 "subsystem": "vhost_blk", 00:25:51.029 "config": [] 00:25:51.029 }, 00:25:51.029 { 00:25:51.029 "subsystem": "ublk", 00:25:51.029 "config": [ 00:25:51.029 { 00:25:51.029 "method": "ublk_create_target", 00:25:51.029 "params": { 00:25:51.029 "cpumask": "1" 00:25:51.029 } 00:25:51.029 }, 00:25:51.029 { 00:25:51.029 "method": "ublk_start_disk", 00:25:51.029 "params": { 00:25:51.029 "bdev_name": "malloc0", 00:25:51.029 "ublk_id": 0, 00:25:51.029 "num_queues": 1, 00:25:51.029 "queue_depth": 128 00:25:51.029 } 00:25:51.029 } 00:25:51.029 ] 00:25:51.029 }, 00:25:51.029 { 00:25:51.029 "subsystem": "nbd", 00:25:51.029 "config": [] 00:25:51.029 }, 00:25:51.029 { 00:25:51.029 "subsystem": "nvmf", 00:25:51.029 "config": [ 00:25:51.029 { 00:25:51.029 "method": "nvmf_set_config", 00:25:51.029 "params": { 00:25:51.029 "discovery_filter": "match_any", 00:25:51.029 "admin_cmd_passthru": { 00:25:51.029 "identify_ctrlr": false 00:25:51.029 } 00:25:51.029 } 00:25:51.029 }, 00:25:51.029 { 00:25:51.029 "method": "nvmf_set_max_subsystems", 00:25:51.029 "params": { 00:25:51.029 "max_subsystems": 1024 00:25:51.029 } 00:25:51.029 }, 00:25:51.029 { 00:25:51.029 "method": "nvmf_set_crdt", 00:25:51.029 "params": { 00:25:51.029 "crdt1": 0, 00:25:51.029 "crdt2": 0, 00:25:51.029 "crdt3": 0 00:25:51.029 } 00:25:51.029 } 00:25:51.029 ] 00:25:51.029 }, 00:25:51.029 { 00:25:51.029 "subsystem": "iscsi", 00:25:51.029 "config": [ 00:25:51.029 { 00:25:51.029 "method": "iscsi_set_options", 00:25:51.029 "params": { 00:25:51.029 "node_base": "iqn.2016-06.io.spdk", 00:25:51.029 "max_sessions": 128, 00:25:51.029 "max_connections_per_session": 2, 00:25:51.029 "max_queue_depth": 64, 00:25:51.029 "default_time2wait": 2, 00:25:51.029 "default_time2retain": 20, 00:25:51.029 "first_burst_length": 8192, 00:25:51.029 "immediate_data": true, 00:25:51.029 "allow_duplicated_isid": false, 00:25:51.029 "error_recovery_level": 0, 00:25:51.029 "nop_timeout": 60, 00:25:51.029 "nop_in_interval": 30, 00:25:51.029 "disable_chap": false, 00:25:51.029 "require_chap": false, 00:25:51.029 "mutual_chap": false, 00:25:51.029 "chap_group": 0, 00:25:51.029 "max_large_datain_per_connection": 64, 00:25:51.029 "max_r2t_per_connection": 4, 00:25:51.029 "pdu_pool_size": 36864, 00:25:51.029 "immediate_data_pool_size": 16384, 00:25:51.029 "data_out_pool_size": 2048 00:25:51.029 } 00:25:51.029 } 00:25:51.029 ] 00:25:51.029 } 00:25:51.029 ] 00:25:51.029 }' 00:25:51.029 14:44:59 -- ublk/ublk.sh@116 -- # killprocess 75875 00:25:51.029 14:44:59 -- 
common/autotest_common.sh@936 -- # '[' -z 75875 ']' 00:25:51.029 14:44:59 -- common/autotest_common.sh@940 -- # kill -0 75875 00:25:51.029 14:44:59 -- common/autotest_common.sh@941 -- # uname 00:25:51.029 14:44:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:51.029 14:44:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75875 00:25:51.029 14:44:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:51.029 killing process with pid 75875 00:25:51.029 14:44:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:51.029 14:44:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75875' 00:25:51.029 14:44:59 -- common/autotest_common.sh@955 -- # kill 75875 00:25:51.029 14:44:59 -- common/autotest_common.sh@960 -- # wait 75875 00:25:52.412 [2024-04-17 14:45:01.001841] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:25:52.671 [2024-04-17 14:45:01.042571] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:25:52.671 [2024-04-17 14:45:01.072596] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:25:52.671 [2024-04-17 14:45:01.099536] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:25:52.671 [2024-04-17 14:45:01.099639] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:25:52.671 [2024-04-17 14:45:01.099650] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:25:52.671 [2024-04-17 14:45:01.099680] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:25:52.671 [2024-04-17 14:45:01.099866] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:25:54.575 14:45:02 -- ublk/ublk.sh@119 -- # tgtpid=75952 00:25:54.575 14:45:02 -- ublk/ublk.sh@121 -- # waitforlisten 75952 00:25:54.575 14:45:02 -- common/autotest_common.sh@817 -- # '[' -z 75952 ']' 00:25:54.575 14:45:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.575 14:45:02 -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:25:54.575 14:45:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:54.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:54.575 14:45:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
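
The config captured from the first target is never written to disk; ublk.sh feeds it straight back into a brand-new spdk_tgt through bash process substitution, which is why the target reads its startup JSON from /dev/fd/63 and why the full document is echoed in the trace that follows. A sketch of that round trip, using the RPC and binary names seen in this log (paths shortened to repo-relative form):

# capture the running target's configuration as a JSON document
config=$(./scripts/rpc.py save_config)
# restart the target, handing the JSON back via process substitution (/dev/fd/63)
./build/bin/spdk_tgt -L ublk -c <(echo "$config") &
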
00:25:54.575 14:45:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:54.575 14:45:02 -- common/autotest_common.sh@10 -- # set +x 00:25:54.575 14:45:02 -- ublk/ublk.sh@118 -- # echo '{ 00:25:54.575 "subsystems": [ 00:25:54.575 { 00:25:54.575 "subsystem": "keyring", 00:25:54.575 "config": [] 00:25:54.575 }, 00:25:54.575 { 00:25:54.575 "subsystem": "iobuf", 00:25:54.575 "config": [ 00:25:54.575 { 00:25:54.575 "method": "iobuf_set_options", 00:25:54.575 "params": { 00:25:54.575 "small_pool_count": 8192, 00:25:54.575 "large_pool_count": 1024, 00:25:54.575 "small_bufsize": 8192, 00:25:54.575 "large_bufsize": 135168 00:25:54.575 } 00:25:54.575 } 00:25:54.575 ] 00:25:54.575 }, 00:25:54.575 { 00:25:54.575 "subsystem": "sock", 00:25:54.575 "config": [ 00:25:54.575 { 00:25:54.575 "method": "sock_impl_set_options", 00:25:54.575 "params": { 00:25:54.575 "impl_name": "posix", 00:25:54.575 "recv_buf_size": 2097152, 00:25:54.575 "send_buf_size": 2097152, 00:25:54.575 "enable_recv_pipe": true, 00:25:54.575 "enable_quickack": false, 00:25:54.575 "enable_placement_id": 0, 00:25:54.575 "enable_zerocopy_send_server": true, 00:25:54.575 "enable_zerocopy_send_client": false, 00:25:54.575 "zerocopy_threshold": 0, 00:25:54.575 "tls_version": 0, 00:25:54.575 "enable_ktls": false 00:25:54.575 } 00:25:54.575 }, 00:25:54.575 { 00:25:54.575 "method": "sock_impl_set_options", 00:25:54.575 "params": { 00:25:54.575 "impl_name": "ssl", 00:25:54.575 "recv_buf_size": 4096, 00:25:54.575 "send_buf_size": 4096, 00:25:54.575 "enable_recv_pipe": true, 00:25:54.575 "enable_quickack": false, 00:25:54.575 "enable_placement_id": 0, 00:25:54.575 "enable_zerocopy_send_server": true, 00:25:54.575 "enable_zerocopy_send_client": false, 00:25:54.575 "zerocopy_threshold": 0, 00:25:54.575 "tls_version": 0, 00:25:54.575 "enable_ktls": false 00:25:54.575 } 00:25:54.575 } 00:25:54.575 ] 00:25:54.575 }, 00:25:54.575 { 00:25:54.575 "subsystem": "vmd", 00:25:54.575 "config": [] 00:25:54.575 }, 00:25:54.575 { 00:25:54.575 "subsystem": "accel", 00:25:54.575 "config": [ 00:25:54.575 { 00:25:54.575 "method": "accel_set_options", 00:25:54.575 "params": { 00:25:54.575 "small_cache_size": 128, 00:25:54.575 "large_cache_size": 16, 00:25:54.575 "task_count": 2048, 00:25:54.575 "sequence_count": 2048, 00:25:54.575 "buf_count": 2048 00:25:54.575 } 00:25:54.575 } 00:25:54.575 ] 00:25:54.575 }, 00:25:54.575 { 00:25:54.575 "subsystem": "bdev", 00:25:54.575 "config": [ 00:25:54.575 { 00:25:54.575 "method": "bdev_set_options", 00:25:54.575 "params": { 00:25:54.575 "bdev_io_pool_size": 65535, 00:25:54.575 "bdev_io_cache_size": 256, 00:25:54.575 "bdev_auto_examine": true, 00:25:54.575 "iobuf_small_cache_size": 128, 00:25:54.575 "iobuf_large_cache_size": 16 00:25:54.575 } 00:25:54.575 }, 00:25:54.575 { 00:25:54.575 "method": "bdev_raid_set_options", 00:25:54.575 "params": { 00:25:54.575 "process_window_size_kb": 1024 00:25:54.575 } 00:25:54.575 }, 00:25:54.575 { 00:25:54.575 "method": "bdev_iscsi_set_options", 00:25:54.575 "params": { 00:25:54.575 "timeout_sec": 30 00:25:54.575 } 00:25:54.575 }, 00:25:54.575 { 00:25:54.575 "method": "bdev_nvme_set_options", 00:25:54.575 "params": { 00:25:54.575 "action_on_timeout": "none", 00:25:54.575 "timeout_us": 0, 00:25:54.575 "timeout_admin_us": 0, 00:25:54.575 "keep_alive_timeout_ms": 10000, 00:25:54.575 "arbitration_burst": 0, 00:25:54.575 "low_priority_weight": 0, 00:25:54.575 "medium_priority_weight": 0, 00:25:54.575 "high_priority_weight": 0, 00:25:54.575 "nvme_adminq_poll_period_us": 10000, 00:25:54.575 
"nvme_ioq_poll_period_us": 0, 00:25:54.575 "io_queue_requests": 0, 00:25:54.575 "delay_cmd_submit": true, 00:25:54.575 "transport_retry_count": 4, 00:25:54.575 "bdev_retry_count": 3, 00:25:54.575 "transport_ack_timeout": 0, 00:25:54.575 "ctrlr_loss_timeout_sec": 0, 00:25:54.575 "reconnect_delay_sec": 0, 00:25:54.575 "fast_io_fail_timeout_sec": 0, 00:25:54.575 "disable_auto_failback": false, 00:25:54.575 "generate_uuids": false, 00:25:54.575 "transport_tos": 0, 00:25:54.575 "nvme_error_stat": false, 00:25:54.575 "rdma_srq_size": 0, 00:25:54.575 "io_path_stat": false, 00:25:54.575 "allow_accel_sequence": false, 00:25:54.575 "rdma_max_cq_size": 0, 00:25:54.575 "rdma_cm_event_timeout_ms": 0, 00:25:54.575 "dhchap_digests": [ 00:25:54.575 "sha256", 00:25:54.575 "sha384", 00:25:54.575 "sha512" 00:25:54.575 ], 00:25:54.575 "dhchap_dhgroups": [ 00:25:54.575 "null", 00:25:54.575 "ffdhe2048", 00:25:54.575 "ffdhe3072", 00:25:54.575 "ffdhe4096", 00:25:54.575 "ffdhe6144", 00:25:54.575 "ffdhe8192" 00:25:54.575 ] 00:25:54.575 } 00:25:54.575 }, 00:25:54.575 { 00:25:54.575 "method": "bdev_nvme_set_hotplug", 00:25:54.575 "params": { 00:25:54.575 "period_us": 100000, 00:25:54.575 "enable": false 00:25:54.575 } 00:25:54.575 }, 00:25:54.575 { 00:25:54.575 "method": "bdev_malloc_create", 00:25:54.575 "params": { 00:25:54.575 "name": "malloc0", 00:25:54.575 "num_blocks": 8192, 00:25:54.575 "block_size": 4096, 00:25:54.575 "physical_block_size": 4096, 00:25:54.575 "uuid": "aac4e4b8-bda5-44b6-b49f-67ec108b3e91", 00:25:54.575 "optimal_io_boundary": 0 00:25:54.575 } 00:25:54.575 }, 00:25:54.575 { 00:25:54.575 "method": "bdev_wait_for_examine" 00:25:54.575 } 00:25:54.575 ] 00:25:54.575 }, 00:25:54.575 { 00:25:54.575 "subsystem": "scsi", 00:25:54.575 "config": null 00:25:54.575 }, 00:25:54.575 { 00:25:54.575 "subsystem": "scheduler", 00:25:54.575 "config": [ 00:25:54.575 { 00:25:54.575 "method": "framework_set_scheduler", 00:25:54.575 "params": { 00:25:54.575 "name": "static" 00:25:54.575 } 00:25:54.575 } 00:25:54.575 ] 00:25:54.575 }, 00:25:54.575 { 00:25:54.575 "subsystem": "vhost_scsi", 00:25:54.575 "config": [] 00:25:54.575 }, 00:25:54.575 { 00:25:54.575 "subsystem": "vhost_blk", 00:25:54.575 "config": [] 00:25:54.575 }, 00:25:54.575 { 00:25:54.575 "subsystem": "ublk", 00:25:54.575 "config": [ 00:25:54.575 { 00:25:54.575 "method": "ublk_create_target", 00:25:54.575 "params": { 00:25:54.575 "cpumask": "1" 00:25:54.575 } 00:25:54.575 }, 00:25:54.575 { 00:25:54.575 "method": "ublk_start_disk", 00:25:54.575 "params": { 00:25:54.575 "bdev_name": "malloc0", 00:25:54.575 "ublk_id": 0, 00:25:54.575 "num_queues": 1, 00:25:54.575 "queue_depth": 128 00:25:54.575 } 00:25:54.575 } 00:25:54.575 ] 00:25:54.575 }, 00:25:54.575 { 00:25:54.575 "subsystem": "nbd", 00:25:54.575 "config": [] 00:25:54.575 }, 00:25:54.575 { 00:25:54.575 "subsystem": "nvmf", 00:25:54.575 "config": [ 00:25:54.575 { 00:25:54.575 "method": "nvmf_set_config", 00:25:54.575 "params": { 00:25:54.575 "discovery_filter": "match_any", 00:25:54.575 "admin_cmd_passthru": { 00:25:54.575 "identify_ctrlr": false 00:25:54.575 } 00:25:54.576 } 00:25:54.576 }, 00:25:54.576 { 00:25:54.576 "method": "nvmf_set_max_subsystems", 00:25:54.576 "params": { 00:25:54.576 "max_subsystems": 1024 00:25:54.576 } 00:25:54.576 }, 00:25:54.576 { 00:25:54.576 "method": "nvmf_set_crdt", 00:25:54.576 "params": { 00:25:54.576 "crdt1": 0, 00:25:54.576 "crdt2": 0, 00:25:54.576 "crdt3": 0 00:25:54.576 } 00:25:54.576 } 00:25:54.576 ] 00:25:54.576 }, 00:25:54.576 { 00:25:54.576 "subsystem": 
"iscsi", 00:25:54.576 "config": [ 00:25:54.576 { 00:25:54.576 "method": "iscsi_set_options", 00:25:54.576 "params": { 00:25:54.576 "node_base": "iqn.2016-06.io.spdk", 00:25:54.576 "max_sessions": 128, 00:25:54.576 "max_connections_per_session": 2, 00:25:54.576 "max_queue_depth": 64, 00:25:54.576 "default_time2wait": 2, 00:25:54.576 "default_time2retain": 20, 00:25:54.576 "first_burst_length": 8192, 00:25:54.576 "immediate_data": true, 00:25:54.576 "allow_duplicated_isid": false, 00:25:54.576 "error_recovery_level": 0, 00:25:54.576 "nop_timeout": 60, 00:25:54.576 "nop_in_interval": 30, 00:25:54.576 "disable_chap": false, 00:25:54.576 "require_chap": false, 00:25:54.576 "mutual_chap": false, 00:25:54.576 "chap_group": 0, 00:25:54.576 "max_large_datain_per_connection": 64, 00:25:54.576 "max_r2t_per_connection": 4, 00:25:54.576 "pdu_pool_size": 36864, 00:25:54.576 "immediate_data_pool_size": 16384, 00:25:54.576 "data_out_pool_size": 2048 00:25:54.576 } 00:25:54.576 } 00:25:54.576 ] 00:25:54.576 } 00:25:54.576 ] 00:25:54.576 }' 00:25:54.576 [2024-04-17 14:45:02.803622] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:25:54.576 [2024-04-17 14:45:02.804647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75952 ] 00:25:54.576 [2024-04-17 14:45:03.009453] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.834 [2024-04-17 14:45:03.284715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.214 [2024-04-17 14:45:04.471942] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:25:56.214 [2024-04-17 14:45:04.476190] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:25:56.214 [2024-04-17 14:45:04.476320] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:25:56.214 [2024-04-17 14:45:04.476332] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:25:56.214 [2024-04-17 14:45:04.476342] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:25:56.214 [2024-04-17 14:45:04.495631] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:25:56.214 [2024-04-17 14:45:04.495670] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:25:56.214 [2024-04-17 14:45:04.522537] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:25:56.214 [2024-04-17 14:45:04.522706] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:25:56.214 [2024-04-17 14:45:04.570590] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:25:56.214 14:45:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:56.214 14:45:04 -- common/autotest_common.sh@850 -- # return 0 00:25:56.214 14:45:04 -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:25:56.214 14:45:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:56.214 14:45:04 -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:25:56.214 14:45:04 -- common/autotest_common.sh@10 -- # set +x 00:25:56.214 14:45:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:56.214 14:45:04 -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:25:56.214 14:45:04 -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 
00:25:56.214 14:45:04 -- ublk/ublk.sh@125 -- # killprocess 75952 00:25:56.214 14:45:04 -- common/autotest_common.sh@936 -- # '[' -z 75952 ']' 00:25:56.214 14:45:04 -- common/autotest_common.sh@940 -- # kill -0 75952 00:25:56.214 14:45:04 -- common/autotest_common.sh@941 -- # uname 00:25:56.214 14:45:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:56.214 14:45:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75952 00:25:56.214 14:45:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:56.214 killing process with pid 75952 00:25:56.214 14:45:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:56.214 14:45:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75952' 00:25:56.214 14:45:04 -- common/autotest_common.sh@955 -- # kill 75952 00:25:56.214 14:45:04 -- common/autotest_common.sh@960 -- # wait 75952 00:25:58.116 [2024-04-17 14:45:06.342741] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:25:58.116 [2024-04-17 14:45:06.380581] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:25:58.116 [2024-04-17 14:45:06.410644] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:25:58.116 [2024-04-17 14:45:06.436547] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:25:58.116 [2024-04-17 14:45:06.436639] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:25:58.116 [2024-04-17 14:45:06.436650] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:25:58.116 [2024-04-17 14:45:06.436683] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:25:58.116 [2024-04-17 14:45:06.436865] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:25:59.490 14:45:08 -- ublk/ublk.sh@126 -- # trap - EXIT 00:25:59.490 00:25:59.490 real 0m10.795s 00:25:59.490 user 0m9.480s 00:25:59.490 sys 0m2.351s 00:25:59.491 14:45:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:59.491 ************************************ 00:25:59.491 END TEST test_save_ublk_config 00:25:59.491 ************************************ 00:25:59.491 14:45:08 -- common/autotest_common.sh@10 -- # set +x 00:25:59.491 14:45:08 -- ublk/ublk.sh@139 -- # spdk_pid=76038 00:25:59.491 14:45:08 -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:25:59.491 14:45:08 -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:59.491 14:45:08 -- ublk/ublk.sh@141 -- # waitforlisten 76038 00:25:59.491 14:45:08 -- common/autotest_common.sh@817 -- # '[' -z 76038 ']' 00:25:59.491 14:45:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:59.491 14:45:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:59.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:59.491 14:45:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:59.491 14:45:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:59.491 14:45:08 -- common/autotest_common.sh@10 -- # set +x 00:25:59.749 [2024-04-17 14:45:08.193229] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
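
Unlike the single-core targets used for the config round trip, this target is started with -m 0x3, a two-bit core mask, so the trace that follows reports two available cores and two reactors. A one-line sketch of the mask arithmetic (0x3 == 0b11, i.e. cores 0 and 1):

./build/bin/spdk_tgt -m 0x3 -L ublk   # mask 0b11 -> reactor threads on cores 0 and 1
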
00:25:59.749 [2024-04-17 14:45:08.193977] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76038 ] 00:26:00.008 [2024-04-17 14:45:08.377751] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:00.267 [2024-04-17 14:45:08.633401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.267 [2024-04-17 14:45:08.633418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:01.203 14:45:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:01.203 14:45:09 -- common/autotest_common.sh@850 -- # return 0 00:26:01.203 14:45:09 -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:26:01.203 14:45:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:01.203 14:45:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:01.203 14:45:09 -- common/autotest_common.sh@10 -- # set +x 00:26:01.463 ************************************ 00:26:01.463 START TEST test_create_ublk 00:26:01.463 ************************************ 00:26:01.463 14:45:09 -- common/autotest_common.sh@1111 -- # test_create_ublk 00:26:01.463 14:45:09 -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:26:01.463 14:45:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.463 14:45:09 -- common/autotest_common.sh@10 -- # set +x 00:26:01.463 [2024-04-17 14:45:09.850151] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:26:01.463 14:45:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.463 14:45:09 -- ublk/ublk.sh@33 -- # ublk_target= 00:26:01.463 14:45:09 -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:26:01.463 14:45:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.463 14:45:09 -- common/autotest_common.sh@10 -- # set +x 00:26:01.722 14:45:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.722 14:45:10 -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:26:01.722 14:45:10 -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:26:01.722 14:45:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.722 14:45:10 -- common/autotest_common.sh@10 -- # set +x 00:26:01.722 [2024-04-17 14:45:10.271345] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:26:01.722 [2024-04-17 14:45:10.271937] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:26:01.722 [2024-04-17 14:45:10.271959] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:26:01.722 [2024-04-17 14:45:10.271986] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:26:01.722 [2024-04-17 14:45:10.288533] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:26:01.722 [2024-04-17 14:45:10.288582] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:26:01.722 [2024-04-17 14:45:10.314529] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:26:01.981 [2024-04-17 14:45:10.324785] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:26:01.981 [2024-04-17 14:45:10.371533] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:26:01.981 14:45:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.981 14:45:10 -- 
ublk/ublk.sh@37 -- # ublk_id=0 00:26:01.981 14:45:10 -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:26:01.981 14:45:10 -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:26:01.981 14:45:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.981 14:45:10 -- common/autotest_common.sh@10 -- # set +x 00:26:01.981 14:45:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.981 14:45:10 -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:26:01.981 { 00:26:01.981 "ublk_device": "/dev/ublkb0", 00:26:01.981 "id": 0, 00:26:01.981 "queue_depth": 512, 00:26:01.981 "num_queues": 4, 00:26:01.981 "bdev_name": "Malloc0" 00:26:01.981 } 00:26:01.981 ]' 00:26:01.981 14:45:10 -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:26:01.981 14:45:10 -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:26:01.981 14:45:10 -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:26:01.981 14:45:10 -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:26:01.981 14:45:10 -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:26:01.981 14:45:10 -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:26:01.981 14:45:10 -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:26:01.981 14:45:10 -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:26:01.981 14:45:10 -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:26:02.240 14:45:10 -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:26:02.240 14:45:10 -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:26:02.240 14:45:10 -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:26:02.240 14:45:10 -- lvol/common.sh@41 -- # local offset=0 00:26:02.240 14:45:10 -- lvol/common.sh@42 -- # local size=134217728 00:26:02.240 14:45:10 -- lvol/common.sh@43 -- # local rw=write 00:26:02.240 14:45:10 -- lvol/common.sh@44 -- # local pattern=0xcc 00:26:02.240 14:45:10 -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:26:02.240 14:45:10 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:26:02.240 14:45:10 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:26:02.240 14:45:10 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:26:02.240 14:45:10 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:26:02.240 14:45:10 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:26:02.240 fio: verification read phase will never start because write phase uses all of runtime 00:26:02.240 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:26:02.240 fio-3.35 00:26:02.240 Starting 1 process 00:26:14.453 00:26:14.454 fio_test: (groupid=0, jobs=1): err= 0: pid=76099: Wed Apr 17 14:45:20 2024 00:26:14.454 write: IOPS=13.0k, BW=50.9MiB/s (53.3MB/s)(509MiB/10001msec); 0 zone resets 00:26:14.454 clat (usec): min=46, max=13096, avg=75.76, stdev=166.03 00:26:14.454 lat (usec): min=47, max=13096, avg=76.31, stdev=166.04 00:26:14.454 clat percentiles (usec): 00:26:14.454 | 1.00th=[ 62], 5.00th=[ 65], 10.00th=[ 66], 20.00th=[ 68], 00:26:14.454 | 30.00th=[ 70], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 74], 00:26:14.454 | 70.00th=[ 76], 80.00th=[ 80], 90.00th=[ 85], 95.00th=[ 90], 
00:26:14.454 | 99.00th=[ 103], 99.50th=[ 109], 99.90th=[ 125], 99.95th=[ 135], 00:26:14.454 | 99.99th=[12780] 00:26:14.454 bw ( KiB/s): min=24424, max=55664, per=99.79%, avg=51989.47, stdev=6911.29, samples=19 00:26:14.454 iops : min= 6106, max=13916, avg=12997.37, stdev=1727.82, samples=19 00:26:14.454 lat (usec) : 50=0.01%, 100=98.53%, 250=1.44%, 500=0.01%, 750=0.01% 00:26:14.454 lat (msec) : 20=0.02% 00:26:14.454 cpu : usr=3.07%, sys=10.40%, ctx=130318, majf=0, minf=798 00:26:14.454 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:14.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.454 issued rwts: total=0,130260,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:14.454 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:14.454 00:26:14.454 Run status group 0 (all jobs): 00:26:14.454 WRITE: bw=50.9MiB/s (53.3MB/s), 50.9MiB/s-50.9MiB/s (53.3MB/s-53.3MB/s), io=509MiB (534MB), run=10001-10001msec 00:26:14.454 00:26:14.454 Disk stats (read/write): 00:26:14.454 ublkb0: ios=0/128810, merge=0/0, ticks=0/8639, in_queue=8640, util=99.10% 00:26:14.454 14:45:20 -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:26:14.454 14:45:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:14.454 14:45:20 -- common/autotest_common.sh@10 -- # set +x 00:26:14.454 [2024-04-17 14:45:20.854427] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:26:14.454 [2024-04-17 14:45:20.893590] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:26:14.454 [2024-04-17 14:45:20.898081] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:26:14.454 [2024-04-17 14:45:20.912587] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:26:14.454 [2024-04-17 14:45:20.912987] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:26:14.454 [2024-04-17 14:45:20.913007] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:26:14.454 14:45:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:14.454 14:45:20 -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:26:14.454 14:45:20 -- common/autotest_common.sh@638 -- # local es=0 00:26:14.454 14:45:20 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:26:14.454 14:45:20 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:26:14.454 14:45:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:14.454 14:45:20 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:26:14.454 14:45:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:14.454 14:45:20 -- common/autotest_common.sh@641 -- # rpc_cmd ublk_stop_disk 0 00:26:14.454 14:45:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:14.454 14:45:20 -- common/autotest_common.sh@10 -- # set +x 00:26:14.454 [2024-04-17 14:45:20.924727] ublk.c:1049:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:26:14.454 request: 00:26:14.454 { 00:26:14.454 "ublk_id": 0, 00:26:14.454 "method": "ublk_stop_disk", 00:26:14.454 "req_id": 1 00:26:14.454 } 00:26:14.454 Got JSON-RPC error response 00:26:14.454 response: 00:26:14.454 { 00:26:14.454 "code": -19, 00:26:14.454 "message": "No such device" 00:26:14.454 } 00:26:14.454 14:45:20 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:26:14.454 14:45:20 -- common/autotest_common.sh@641 -- # es=1 
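
The request/response pair above is the negative half of the test: once ublk id 0 has been stopped and deleted, a second ublk_stop_disk for the same id is expected to fail with code -19 (ENODEV, "No such device"), and the NOT wrapper in the trace that follows treats that non-zero exit status as a pass. A sketch of the sequence:

./scripts/rpc.py ublk_stop_disk 0   # first stop succeeds
./scripts/rpc.py ublk_stop_disk 0   # second stop -> JSON-RPC error, code -19 (ENODEV)
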
00:26:14.454 14:45:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:14.454 14:45:20 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:14.454 14:45:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:14.454 14:45:20 -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:26:14.454 14:45:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:14.454 14:45:20 -- common/autotest_common.sh@10 -- # set +x 00:26:14.454 [2024-04-17 14:45:20.956647] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:26:14.454 [2024-04-17 14:45:20.964847] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:26:14.454 [2024-04-17 14:45:20.964907] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:26:14.454 14:45:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:14.454 14:45:20 -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:14.454 14:45:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:14.454 14:45:20 -- common/autotest_common.sh@10 -- # set +x 00:26:14.454 14:45:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:14.454 14:45:21 -- ublk/ublk.sh@57 -- # check_leftover_devices 00:26:14.454 14:45:21 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:26:14.454 14:45:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:14.454 14:45:21 -- common/autotest_common.sh@10 -- # set +x 00:26:14.454 14:45:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:14.454 14:45:21 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:26:14.454 14:45:21 -- lvol/common.sh@26 -- # jq length 00:26:14.454 14:45:21 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:26:14.454 14:45:21 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:26:14.454 14:45:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:14.454 14:45:21 -- common/autotest_common.sh@10 -- # set +x 00:26:14.454 14:45:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:14.454 14:45:21 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:26:14.454 14:45:21 -- lvol/common.sh@28 -- # jq length 00:26:14.454 14:45:21 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:26:14.454 00:26:14.454 real 0m11.680s 00:26:14.454 user 0m0.676s 00:26:14.454 sys 0m1.156s 00:26:14.454 14:45:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:14.454 14:45:21 -- common/autotest_common.sh@10 -- # set +x 00:26:14.454 ************************************ 00:26:14.454 END TEST test_create_ublk 00:26:14.454 ************************************ 00:26:14.454 14:45:21 -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:26:14.454 14:45:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:14.454 14:45:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:14.454 14:45:21 -- common/autotest_common.sh@10 -- # set +x 00:26:14.454 ************************************ 00:26:14.454 START TEST test_create_multi_ublk 00:26:14.454 ************************************ 00:26:14.454 14:45:21 -- common/autotest_common.sh@1111 -- # test_create_multi_ublk 00:26:14.454 14:45:21 -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:26:14.454 14:45:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:14.454 14:45:21 -- common/autotest_common.sh@10 -- # set +x 00:26:14.454 [2024-04-17 14:45:21.644966] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:26:14.454 14:45:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:14.454 14:45:21 -- ublk/ublk.sh@62 -- # ublk_target= 00:26:14.454 14:45:21 -- ublk/ublk.sh@64 
-- # seq 0 3 00:26:14.454 14:45:21 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:14.454 14:45:21 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:26:14.454 14:45:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:14.454 14:45:21 -- common/autotest_common.sh@10 -- # set +x 00:26:14.454 14:45:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:14.454 14:45:22 -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:26:14.454 14:45:22 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:26:14.454 14:45:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:14.454 14:45:22 -- common/autotest_common.sh@10 -- # set +x 00:26:14.454 [2024-04-17 14:45:22.034673] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:26:14.454 [2024-04-17 14:45:22.035222] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:26:14.454 [2024-04-17 14:45:22.035242] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:26:14.454 [2024-04-17 14:45:22.035255] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:26:14.454 [2024-04-17 14:45:22.060526] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:26:14.454 [2024-04-17 14:45:22.060565] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:26:14.454 [2024-04-17 14:45:22.086550] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:26:14.454 [2024-04-17 14:45:22.087376] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:26:14.454 [2024-04-17 14:45:22.109601] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:26:14.454 14:45:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:14.454 14:45:22 -- ublk/ublk.sh@68 -- # ublk_id=0 00:26:14.454 14:45:22 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:14.454 14:45:22 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:26:14.454 14:45:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:14.454 14:45:22 -- common/autotest_common.sh@10 -- # set +x 00:26:14.454 14:45:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:14.454 14:45:22 -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:26:14.454 14:45:22 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:26:14.454 14:45:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:14.454 14:45:22 -- common/autotest_common.sh@10 -- # set +x 00:26:14.454 [2024-04-17 14:45:22.523705] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:26:14.454 [2024-04-17 14:45:22.524208] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:26:14.454 [2024-04-17 14:45:22.524230] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:26:14.454 [2024-04-17 14:45:22.524240] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:26:14.454 [2024-04-17 14:45:22.549564] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:26:14.454 [2024-04-17 14:45:22.549598] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:26:14.454 [2024-04-17 14:45:22.575555] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:26:14.454 [2024-04-17 14:45:22.576356] ublk.c: 
433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:26:14.454 [2024-04-17 14:45:22.602589] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:26:14.455 14:45:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:14.455 14:45:22 -- ublk/ublk.sh@68 -- # ublk_id=1 00:26:14.455 14:45:22 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:14.455 14:45:22 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:26:14.455 14:45:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:14.455 14:45:22 -- common/autotest_common.sh@10 -- # set +x 00:26:14.455 14:45:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:14.455 14:45:22 -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:26:14.455 14:45:22 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:26:14.455 14:45:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:14.455 14:45:22 -- common/autotest_common.sh@10 -- # set +x 00:26:14.455 [2024-04-17 14:45:23.006750] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:26:14.455 [2024-04-17 14:45:23.007276] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:26:14.455 [2024-04-17 14:45:23.007294] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:26:14.455 [2024-04-17 14:45:23.007307] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:26:14.455 [2024-04-17 14:45:23.032546] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:26:14.455 [2024-04-17 14:45:23.032607] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:26:14.714 [2024-04-17 14:45:23.058540] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:26:14.714 [2024-04-17 14:45:23.059282] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:26:14.714 [2024-04-17 14:45:23.085617] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:26:14.714 14:45:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:14.714 14:45:23 -- ublk/ublk.sh@68 -- # ublk_id=2 00:26:14.714 14:45:23 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:14.714 14:45:23 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:26:14.714 14:45:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:14.714 14:45:23 -- common/autotest_common.sh@10 -- # set +x 00:26:14.972 14:45:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:14.972 14:45:23 -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:26:14.972 14:45:23 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:26:14.972 14:45:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:14.972 14:45:23 -- common/autotest_common.sh@10 -- # set +x 00:26:14.972 [2024-04-17 14:45:23.487307] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:26:14.972 [2024-04-17 14:45:23.487827] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:26:14.972 [2024-04-17 14:45:23.487850] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:26:14.972 [2024-04-17 14:45:23.487860] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:26:14.972 [2024-04-17 14:45:23.504547] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 
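
test_create_multi_ublk repeats one create/start pair per device id, walking seq 0 3 with a malloc bdev named after each id, as traced above for ublk0 and ublk1 and continuing below for ublk2 and ublk3. A compact sketch of that loop, condensed from the traced commands (the test itself unrolls it through rpc_cmd):

for i in 0 1 2 3; do
    ./scripts/rpc.py bdev_malloc_create -b Malloc$i 128 4096   # 128 MiB, 4 KiB blocks
    ./scripts/rpc.py ublk_start_disk Malloc$i $i -q 4 -d 512   # 4 queues, depth 512
done
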
00:26:14.972 [2024-04-17 14:45:23.504593] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:26:14.972 [2024-04-17 14:45:23.530541] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:26:14.972 [2024-04-17 14:45:23.531302] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:26:14.972 [2024-04-17 14:45:23.557592] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:26:14.972 14:45:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:14.972 14:45:23 -- ublk/ublk.sh@68 -- # ublk_id=3 00:26:14.972 14:45:23 -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:26:14.972 14:45:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:14.972 14:45:23 -- common/autotest_common.sh@10 -- # set +x 00:26:15.231 14:45:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.231 14:45:23 -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:26:15.231 { 00:26:15.231 "ublk_device": "/dev/ublkb0", 00:26:15.231 "id": 0, 00:26:15.231 "queue_depth": 512, 00:26:15.231 "num_queues": 4, 00:26:15.231 "bdev_name": "Malloc0" 00:26:15.231 }, 00:26:15.231 { 00:26:15.231 "ublk_device": "/dev/ublkb1", 00:26:15.231 "id": 1, 00:26:15.231 "queue_depth": 512, 00:26:15.231 "num_queues": 4, 00:26:15.231 "bdev_name": "Malloc1" 00:26:15.231 }, 00:26:15.231 { 00:26:15.231 "ublk_device": "/dev/ublkb2", 00:26:15.231 "id": 2, 00:26:15.231 "queue_depth": 512, 00:26:15.231 "num_queues": 4, 00:26:15.231 "bdev_name": "Malloc2" 00:26:15.231 }, 00:26:15.231 { 00:26:15.231 "ublk_device": "/dev/ublkb3", 00:26:15.231 "id": 3, 00:26:15.231 "queue_depth": 512, 00:26:15.231 "num_queues": 4, 00:26:15.231 "bdev_name": "Malloc3" 00:26:15.231 } 00:26:15.231 ]' 00:26:15.231 14:45:23 -- ublk/ublk.sh@72 -- # seq 0 3 00:26:15.231 14:45:23 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:15.231 14:45:23 -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:26:15.231 14:45:23 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:26:15.231 14:45:23 -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:26:15.231 14:45:23 -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:26:15.231 14:45:23 -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:26:15.231 14:45:23 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:26:15.231 14:45:23 -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:26:15.231 14:45:23 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:26:15.231 14:45:23 -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:26:15.231 14:45:23 -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:26:15.231 14:45:23 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:15.231 14:45:23 -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:26:15.489 14:45:23 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:26:15.489 14:45:23 -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:26:15.489 14:45:23 -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:26:15.489 14:45:23 -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:26:15.489 14:45:23 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:26:15.489 14:45:23 -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:26:15.489 14:45:24 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:26:15.489 14:45:24 -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:26:15.489 14:45:24 -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:26:15.489 14:45:24 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:15.489 14:45:24 -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:26:15.747 14:45:24 -- ublk/ublk.sh@74 -- # [[ 
/dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:26:15.747 14:45:24 -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:26:15.747 14:45:24 -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:26:15.747 14:45:24 -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:26:15.747 14:45:24 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:26:15.747 14:45:24 -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:26:15.747 14:45:24 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:26:15.747 14:45:24 -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:26:15.747 14:45:24 -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:26:15.747 14:45:24 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:15.747 14:45:24 -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:26:16.005 14:45:24 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:26:16.005 14:45:24 -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:26:16.005 14:45:24 -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:26:16.005 14:45:24 -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:26:16.005 14:45:24 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:26:16.005 14:45:24 -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:26:16.005 14:45:24 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:26:16.005 14:45:24 -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:26:16.005 14:45:24 -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:26:16.005 14:45:24 -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:26:16.005 14:45:24 -- ublk/ublk.sh@85 -- # seq 0 3 00:26:16.005 14:45:24 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:16.005 14:45:24 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:26:16.005 14:45:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:16.005 14:45:24 -- common/autotest_common.sh@10 -- # set +x 00:26:16.005 [2024-04-17 14:45:24.585643] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:26:16.262 [2024-04-17 14:45:24.630605] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:26:16.262 [2024-04-17 14:45:24.630872] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:26:16.263 [2024-04-17 14:45:24.655581] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:26:16.263 [2024-04-17 14:45:24.655986] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:26:16.263 [2024-04-17 14:45:24.656015] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:26:16.263 14:45:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:16.263 14:45:24 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:16.263 14:45:24 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:26:16.263 14:45:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:16.263 14:45:24 -- common/autotest_common.sh@10 -- # set +x 00:26:16.263 [2024-04-17 14:45:24.663730] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:26:16.263 [2024-04-17 14:45:24.704535] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:26:16.263 [2024-04-17 14:45:24.704810] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:26:16.263 [2024-04-17 14:45:24.731559] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:26:16.263 [2024-04-17 14:45:24.731901] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:26:16.263 [2024-04-17 14:45:24.731917] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:26:16.263 14:45:24 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:16.263 14:45:24 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:16.263 14:45:24 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:26:16.263 14:45:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:16.263 14:45:24 -- common/autotest_common.sh@10 -- # set +x 00:26:16.263 [2024-04-17 14:45:24.735652] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:26:16.263 [2024-04-17 14:45:24.779553] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:26:16.263 [2024-04-17 14:45:24.779882] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:26:16.263 [2024-04-17 14:45:24.806563] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:26:16.263 [2024-04-17 14:45:24.806988] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:26:16.263 [2024-04-17 14:45:24.807007] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:26:16.263 14:45:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:16.263 14:45:24 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:16.263 14:45:24 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:26:16.263 14:45:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:16.263 14:45:24 -- common/autotest_common.sh@10 -- # set +x 00:26:16.263 [2024-04-17 14:45:24.810695] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:26:16.263 [2024-04-17 14:45:24.850604] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:26:16.521 [2024-04-17 14:45:24.880737] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:26:16.521 [2024-04-17 14:45:24.906542] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:26:16.521 [2024-04-17 14:45:24.906910] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:26:16.521 [2024-04-17 14:45:24.906932] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:26:16.521 14:45:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:16.521 14:45:24 -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:26:16.521 [2024-04-17 14:45:25.119669] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:26:16.780 [2024-04-17 14:45:25.126916] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:26:16.780 [2024-04-17 14:45:25.126971] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:26:16.780 14:45:25 -- ublk/ublk.sh@93 -- # seq 0 3 00:26:16.780 14:45:25 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:16.780 14:45:25 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:16.780 14:45:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:16.780 14:45:25 -- common/autotest_common.sh@10 -- # set +x 00:26:17.038 14:45:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:17.038 14:45:25 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:17.038 14:45:25 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:26:17.038 14:45:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:17.038 14:45:25 -- common/autotest_common.sh@10 -- # set +x 00:26:17.606 14:45:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:17.606 14:45:25 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:17.606 14:45:25 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 
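
Teardown inverts the loop: each ublk device is stopped by id, the ublk target is destroyed with an extended RPC client timeout (-t 120, presumably because tearing down the kernel-side devices can be slow), and finally the backing malloc bdevs are deleted (Malloc3's deletion continues below). A sketch of that order, matching the commands in the trace:

for i in 0 1 2 3; do ./scripts/rpc.py ublk_stop_disk $i; done
./scripts/rpc.py -t 120 ublk_destroy_target      # -t raises the client timeout for teardown
for i in 0 1 2 3; do ./scripts/rpc.py bdev_malloc_delete Malloc$i; done
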
00:26:17.606 14:45:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:17.606 14:45:25 -- common/autotest_common.sh@10 -- # set +x 00:26:17.864 14:45:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:17.864 14:45:26 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:17.864 14:45:26 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:26:17.864 14:45:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:17.864 14:45:26 -- common/autotest_common.sh@10 -- # set +x 00:26:18.432 14:45:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.432 14:45:26 -- ublk/ublk.sh@96 -- # check_leftover_devices 00:26:18.432 14:45:26 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:26:18.432 14:45:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.433 14:45:26 -- common/autotest_common.sh@10 -- # set +x 00:26:18.433 14:45:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.433 14:45:26 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:26:18.433 14:45:26 -- lvol/common.sh@26 -- # jq length 00:26:18.433 14:45:26 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:26:18.433 14:45:26 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:26:18.433 14:45:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.433 14:45:26 -- common/autotest_common.sh@10 -- # set +x 00:26:18.433 14:45:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.433 14:45:26 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:26:18.433 14:45:26 -- lvol/common.sh@28 -- # jq length 00:26:18.433 14:45:26 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:26:18.433 00:26:18.433 real 0m5.236s 00:26:18.433 user 0m1.123s 00:26:18.433 sys 0m0.223s 00:26:18.433 14:45:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:18.433 14:45:26 -- common/autotest_common.sh@10 -- # set +x 00:26:18.433 ************************************ 00:26:18.433 END TEST test_create_multi_ublk 00:26:18.433 ************************************ 00:26:18.433 14:45:26 -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:18.433 14:45:26 -- ublk/ublk.sh@147 -- # cleanup 00:26:18.433 14:45:26 -- ublk/ublk.sh@130 -- # killprocess 76038 00:26:18.433 14:45:26 -- common/autotest_common.sh@936 -- # '[' -z 76038 ']' 00:26:18.433 14:45:26 -- common/autotest_common.sh@940 -- # kill -0 76038 00:26:18.433 14:45:26 -- common/autotest_common.sh@941 -- # uname 00:26:18.433 14:45:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:18.433 14:45:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76038 00:26:18.433 14:45:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:18.433 14:45:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:18.433 killing process with pid 76038 00:26:18.433 14:45:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76038' 00:26:18.433 14:45:26 -- common/autotest_common.sh@955 -- # kill 76038 00:26:18.433 14:45:26 -- common/autotest_common.sh@960 -- # wait 76038 00:26:19.846 [2024-04-17 14:45:28.257005] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:26:19.846 [2024-04-17 14:45:28.257074] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:26:21.268 00:26:21.268 real 0m32.722s 00:26:21.268 user 0m48.541s 00:26:21.268 sys 0m9.227s 00:26:21.268 14:45:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:21.268 14:45:29 -- common/autotest_common.sh@10 -- # set +x 00:26:21.268 ************************************ 00:26:21.268 END TEST ublk 00:26:21.268 
************************************ 00:26:21.268 14:45:29 -- spdk/autotest.sh@249 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:26:21.268 14:45:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:21.268 14:45:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:21.268 14:45:29 -- common/autotest_common.sh@10 -- # set +x 00:26:21.527 ************************************ 00:26:21.527 START TEST ublk_recovery 00:26:21.527 ************************************ 00:26:21.527 14:45:29 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:26:21.527 * Looking for test storage... 00:26:21.527 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:26:21.527 14:45:29 -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:26:21.527 14:45:29 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:26:21.527 14:45:29 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:26:21.527 14:45:29 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:26:21.527 14:45:29 -- lvol/common.sh@9 -- # AIO_BS=4096 00:26:21.527 14:45:29 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:26:21.527 14:45:29 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:26:21.527 14:45:29 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:26:21.527 14:45:29 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:26:21.527 14:45:29 -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:26:21.527 14:45:29 -- ublk/ublk_recovery.sh@19 -- # spdk_pid=76464 00:26:21.527 14:45:29 -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:26:21.527 14:45:29 -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:21.527 14:45:29 -- ublk/ublk_recovery.sh@21 -- # waitforlisten 76464 00:26:21.527 14:45:29 -- common/autotest_common.sh@817 -- # '[' -z 76464 ']' 00:26:21.527 14:45:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:21.527 14:45:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:21.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:21.527 14:45:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:21.527 14:45:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:21.527 14:45:29 -- common/autotest_common.sh@10 -- # set +x 00:26:21.786 [2024-04-17 14:45:30.140750] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:26:21.786 [2024-04-17 14:45:30.140925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76464 ] 00:26:21.786 [2024-04-17 14:45:30.332825] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:22.376 [2024-04-17 14:45:30.690201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.376 [2024-04-17 14:45:30.690231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:23.361 14:45:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:23.361 14:45:31 -- common/autotest_common.sh@850 -- # return 0 00:26:23.361 14:45:31 -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:26:23.361 14:45:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.361 14:45:31 -- common/autotest_common.sh@10 -- # set +x 00:26:23.361 [2024-04-17 14:45:31.824005] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:26:23.361 14:45:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.361 14:45:31 -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:26:23.361 14:45:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.361 14:45:31 -- common/autotest_common.sh@10 -- # set +x 00:26:23.620 malloc0 00:26:23.620 14:45:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.620 14:45:32 -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:26:23.620 14:45:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.620 14:45:32 -- common/autotest_common.sh@10 -- # set +x 00:26:23.620 [2024-04-17 14:45:32.040251] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:26:23.620 [2024-04-17 14:45:32.040403] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:26:23.620 [2024-04-17 14:45:32.040420] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:26:23.620 [2024-04-17 14:45:32.040433] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:26:23.620 [2024-04-17 14:45:32.064519] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:26:23.620 [2024-04-17 14:45:32.064578] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:26:23.620 [2024-04-17 14:45:32.090519] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:26:23.620 [2024-04-17 14:45:32.090773] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:26:23.620 [2024-04-17 14:45:32.135520] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:26:23.620 1 00:26:23.620 14:45:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.620 14:45:32 -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:26:24.635 14:45:33 -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:26:24.635 14:45:33 -- ublk/ublk_recovery.sh@31 -- # fio_proc=76509 00:26:24.635 14:45:33 -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:26:24.893 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:24.893 fio-3.35 00:26:24.893 Starting 1 process 00:26:30.156 14:45:38 -- 
ublk/ublk_recovery.sh@36 -- # kill -9 76464 00:26:30.156 14:45:38 -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:26:35.441 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 76464 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:26:35.442 14:45:43 -- ublk/ublk_recovery.sh@42 -- # spdk_pid=76640 00:26:35.442 14:45:43 -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:26:35.442 14:45:43 -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:35.442 14:45:43 -- ublk/ublk_recovery.sh@44 -- # waitforlisten 76640 00:26:35.442 14:45:43 -- common/autotest_common.sh@817 -- # '[' -z 76640 ']' 00:26:35.442 14:45:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:35.442 14:45:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:35.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:35.442 14:45:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:35.442 14:45:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:35.442 14:45:43 -- common/autotest_common.sh@10 -- # set +x 00:26:35.442 [2024-04-17 14:45:43.276217] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:26:35.442 [2024-04-17 14:45:43.276360] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76640 ] 00:26:35.442 [2024-04-17 14:45:43.451638] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:35.442 [2024-04-17 14:45:43.777719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:35.442 [2024-04-17 14:45:43.777743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:36.392 14:45:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:36.392 14:45:44 -- common/autotest_common.sh@850 -- # return 0 00:26:36.392 14:45:44 -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:26:36.392 14:45:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:36.392 14:45:44 -- common/autotest_common.sh@10 -- # set +x 00:26:36.392 [2024-04-17 14:45:44.905861] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:26:36.392 14:45:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:36.392 14:45:44 -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:26:36.392 14:45:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:36.392 14:45:44 -- common/autotest_common.sh@10 -- # set +x 00:26:36.650 malloc0 00:26:36.650 14:45:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:36.650 14:45:45 -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:26:36.650 14:45:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:36.650 14:45:45 -- common/autotest_common.sh@10 -- # set +x 00:26:36.650 [2024-04-17 14:45:45.125627] ublk.c:2073:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:26:36.650 [2024-04-17 14:45:45.125686] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:26:36.650 [2024-04-17 14:45:45.125698] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:26:36.650 1 00:26:36.650 14:45:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:36.650 
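Condensed, the recovery scenario this test exercises is the sketch below (paths, sizes, and fio flags taken from the trace; the real script additionally waits for the RPC socket between steps):

spdk=/home/vagrant/spdk_repo/spdk
rpc=$spdk/scripts/rpc.py

$spdk/build/bin/spdk_tgt -m 0x3 -L ublk &         # target on cores 0-1
tgt_pid=$!
$rpc ublk_create_target
$rpc bdev_malloc_create -b malloc0 64 4096        # 64 MiB bdev, 4 KiB blocks
$rpc ublk_start_disk malloc0 1 -q 2 -d 128        # exposes /dev/ublkb1

taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
    --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
    --time_based --runtime=60 &                   # keep I/O in flight the whole time
fio_pid=$!

sleep 5
kill -9 "$tgt_pid"                                # crash the target mid-I/O
sleep 5

$spdk/build/bin/spdk_tgt -m 0x3 -L ublk &         # restart it
$rpc ublk_create_target
$rpc bdev_malloc_create -b malloc0 64 4096        # recreate the backing bdev
$rpc ublk_recover_disk malloc0 1                  # GET_DEV_INFO, START/END_USER_RECOVERY
wait "$fio_pid"                                   # fio rides through the stall

/dev/ublkb1 stays alive in the kernel across the crash; I/O simply stalls until END_USER_RECOVERY completes, which is why fio finishes cleanly and why the multi-second max completion latency in its summary further down plausibly reflects this window.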
14:45:45 -- ublk/ublk_recovery.sh@52 -- # wait 76509 00:26:36.650 [2024-04-17 14:45:45.142549] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:26:36.650 [2024-04-17 14:45:45.142591] ublk.c:2002:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:26:36.650 [2024-04-17 14:45:45.142715] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:26:36.650 [2024-04-17 14:45:45.168543] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:26:36.650 [2024-04-17 14:45:45.182205] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:26:36.650 [2024-04-17 14:45:45.208796] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:26:36.651 [2024-04-17 14:45:45.208839] ublk.c: 377:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:27:33.101 00:27:33.101 fio_test: (groupid=0, jobs=1): err= 0: pid=76518: Wed Apr 17 14:46:33 2024 00:27:33.101 read: IOPS=19.0k, BW=74.1MiB/s (77.7MB/s)(4444MiB/60002msec) 00:27:33.101 slat (nsec): min=1819, max=1048.4k, avg=6250.99, stdev=3440.45 00:27:33.101 clat (usec): min=1269, max=7072.2k, avg=3406.66, stdev=57758.77 00:27:33.101 lat (usec): min=1278, max=7072.2k, avg=3412.92, stdev=57758.77 00:27:33.101 clat percentiles (usec): 00:27:33.101 | 1.00th=[ 2540], 5.00th=[ 2638], 10.00th=[ 2704], 20.00th=[ 2769], 00:27:33.101 | 30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2933], 00:27:33.101 | 70.00th=[ 2966], 80.00th=[ 3032], 90.00th=[ 3097], 95.00th=[ 3130], 00:27:33.101 | 99.00th=[ 3556], 99.50th=[ 4113], 99.90th=[15270], 99.95th=[15795], 00:27:33.101 | 99.99th=[16319] 00:27:33.101 bw ( KiB/s): min=14616, max=93552, per=100.00%, avg=85111.32, stdev=9318.84, samples=106 00:27:33.101 iops : min= 3654, max=23388, avg=21277.83, stdev=2329.71, samples=106 00:27:33.101 write: IOPS=19.0k, BW=74.0MiB/s (77.6MB/s)(4443MiB/60002msec); 0 zone resets 00:27:33.101 slat (nsec): min=1894, max=973933, avg=6460.21, stdev=3710.29 00:27:33.101 clat (usec): min=1390, max=7072.6k, avg=3330.85, stdev=47790.27 00:27:33.101 lat (usec): min=1401, max=7072.6k, avg=3337.31, stdev=47790.26 00:27:33.101 clat percentiles (usec): 00:27:33.101 | 1.00th=[ 2606], 5.00th=[ 2704], 10.00th=[ 2769], 20.00th=[ 2835], 00:27:33.101 | 30.00th=[ 2900], 40.00th=[ 2933], 50.00th=[ 2999], 60.00th=[ 3032], 00:27:33.101 | 70.00th=[ 3064], 80.00th=[ 3097], 90.00th=[ 3163], 95.00th=[ 3228], 00:27:33.101 | 99.00th=[ 3654], 99.50th=[ 4228], 99.90th=[15270], 99.95th=[15926], 00:27:33.101 | 99.99th=[16319] 00:27:33.101 bw ( KiB/s): min=14360, max=92928, per=100.00%, avg=85082.87, stdev=9349.21, samples=106 00:27:33.101 iops : min= 3590, max=23232, avg=21270.72, stdev=2337.30, samples=106 00:27:33.101 lat (msec) : 2=0.01%, 4=99.33%, 10=0.44%, 20=0.23%, >=2000=0.01% 00:27:33.101 cpu : usr=10.10%, sys=23.85%, ctx=140939, majf=0, minf=13 00:27:33.101 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:27:33.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:33.101 issued rwts: total=1137747,1137333,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.101 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:33.101 00:27:33.101 Run status group 0 (all jobs): 00:27:33.101 READ: bw=74.1MiB/s (77.7MB/s), 
74.1MiB/s-74.1MiB/s (77.7MB/s-77.7MB/s), io=4444MiB (4660MB), run=60002-60002msec 00:27:33.101 WRITE: bw=74.0MiB/s (77.6MB/s), 74.0MiB/s-74.0MiB/s (77.6MB/s-77.6MB/s), io=4443MiB (4659MB), run=60002-60002msec 00:27:33.101 00:27:33.101 Disk stats (read/write): 00:27:33.101 ublkb1: ios=1135028/1134687, merge=0/0, ticks=3821767/3657839, in_queue=7479606, util=99.90% 00:27:33.101 14:46:33 -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:27:33.101 14:46:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:33.101 14:46:33 -- common/autotest_common.sh@10 -- # set +x 00:27:33.101 [2024-04-17 14:46:33.413655] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:27:33.101 [2024-04-17 14:46:33.457581] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:27:33.101 [2024-04-17 14:46:33.487727] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:27:33.101 [2024-04-17 14:46:33.513537] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:27:33.101 [2024-04-17 14:46:33.513705] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:27:33.101 [2024-04-17 14:46:33.513723] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:27:33.101 14:46:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:33.101 14:46:33 -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:27:33.101 14:46:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:33.101 14:46:33 -- common/autotest_common.sh@10 -- # set +x 00:27:33.101 [2024-04-17 14:46:33.521631] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:27:33.101 [2024-04-17 14:46:33.539549] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:27:33.101 [2024-04-17 14:46:33.539605] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:27:33.101 14:46:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:33.101 14:46:33 -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:27:33.101 14:46:33 -- ublk/ublk_recovery.sh@59 -- # cleanup 00:27:33.101 14:46:33 -- ublk/ublk_recovery.sh@14 -- # killprocess 76640 00:27:33.101 14:46:33 -- common/autotest_common.sh@936 -- # '[' -z 76640 ']' 00:27:33.101 14:46:33 -- common/autotest_common.sh@940 -- # kill -0 76640 00:27:33.101 14:46:33 -- common/autotest_common.sh@941 -- # uname 00:27:33.101 14:46:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:33.101 14:46:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76640 00:27:33.101 14:46:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:33.101 14:46:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:33.101 killing process with pid 76640 00:27:33.101 14:46:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76640' 00:27:33.101 14:46:33 -- common/autotest_common.sh@955 -- # kill 76640 00:27:33.101 14:46:33 -- common/autotest_common.sh@960 -- # wait 76640 00:27:33.101 [2024-04-17 14:46:34.832549] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:27:33.101 [2024-04-17 14:46:34.832611] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:27:33.101 00:27:33.101 real 1m6.666s 00:27:33.101 user 1m46.186s 00:27:33.101 sys 0m35.972s 00:27:33.101 14:46:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:33.101 ************************************ 00:27:33.101 END TEST ublk_recovery 00:27:33.101 ************************************ 00:27:33.101 14:46:36 -- common/autotest_common.sh@10 -- 
# set +x 00:27:33.101 14:46:36 -- spdk/autotest.sh@253 -- # '[' 0 -eq 1 ']' 00:27:33.101 14:46:36 -- spdk/autotest.sh@257 -- # timing_exit lib 00:27:33.101 14:46:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:33.101 14:46:36 -- common/autotest_common.sh@10 -- # set +x 00:27:33.101 14:46:36 -- spdk/autotest.sh@259 -- # '[' 0 -eq 1 ']' 00:27:33.101 14:46:36 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:27:33.101 14:46:36 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:27:33.101 14:46:36 -- spdk/autotest.sh@305 -- # '[' 0 -eq 1 ']' 00:27:33.101 14:46:36 -- spdk/autotest.sh@309 -- # '[' 0 -eq 1 ']' 00:27:33.101 14:46:36 -- spdk/autotest.sh@313 -- # '[' 0 -eq 1 ']' 00:27:33.101 14:46:36 -- spdk/autotest.sh@318 -- # '[' 0 -eq 1 ']' 00:27:33.101 14:46:36 -- spdk/autotest.sh@327 -- # '[' 0 -eq 1 ']' 00:27:33.101 14:46:36 -- spdk/autotest.sh@332 -- # '[' 0 -eq 1 ']' 00:27:33.101 14:46:36 -- spdk/autotest.sh@336 -- # '[' 1 -eq 1 ']' 00:27:33.101 14:46:36 -- spdk/autotest.sh@337 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:27:33.101 14:46:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:33.101 14:46:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:33.101 14:46:36 -- common/autotest_common.sh@10 -- # set +x 00:27:33.101 ************************************ 00:27:33.101 START TEST ftl 00:27:33.101 ************************************ 00:27:33.101 14:46:36 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:27:33.101 * Looking for test storage... 00:27:33.101 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:33.101 14:46:36 -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:33.102 14:46:36 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:27:33.102 14:46:36 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:33.102 14:46:36 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:33.102 14:46:36 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:27:33.102 14:46:36 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:33.102 14:46:36 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:33.102 14:46:36 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:33.102 14:46:36 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:33.102 14:46:36 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:33.102 14:46:36 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:33.102 14:46:36 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:33.102 14:46:36 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:33.102 14:46:36 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:33.102 14:46:36 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:33.102 14:46:36 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:33.102 14:46:36 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:33.102 14:46:36 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:33.102 14:46:36 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:33.102 14:46:36 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:33.102 14:46:36 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:33.102 14:46:36 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:33.102 14:46:36 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:33.102 14:46:36 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:33.102 14:46:36 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:33.102 14:46:36 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:33.102 14:46:36 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:33.102 14:46:36 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:33.102 14:46:36 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:33.102 14:46:36 -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:33.102 14:46:36 -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:27:33.102 14:46:36 -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:27:33.102 14:46:36 -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:27:33.102 14:46:36 -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:27:33.102 14:46:36 -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:33.102 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:33.102 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:33.102 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:33.102 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:33.102 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:33.102 14:46:37 -- ftl/ftl.sh@37 -- # spdk_tgt_pid=77462 00:27:33.102 14:46:37 -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:27:33.102 14:46:37 -- ftl/ftl.sh@38 -- # waitforlisten 77462 00:27:33.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:33.102 14:46:37 -- common/autotest_common.sh@817 -- # '[' -z 77462 ']' 00:27:33.102 14:46:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.102 14:46:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:33.102 14:46:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.102 14:46:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:33.102 14:46:37 -- common/autotest_common.sh@10 -- # set +x 00:27:33.102 [2024-04-17 14:46:37.565462] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:27:33.102 [2024-04-17 14:46:37.565896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77462 ] 00:27:33.102 [2024-04-17 14:46:37.756501] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.102 [2024-04-17 14:46:38.030965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.102 14:46:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:33.102 14:46:38 -- common/autotest_common.sh@850 -- # return 0 00:27:33.102 14:46:38 -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:27:33.102 14:46:38 -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:27:33.102 14:46:39 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:27:33.102 14:46:39 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:33.102 14:46:40 -- ftl/ftl.sh@46 -- # cache_size=1310720 00:27:33.102 14:46:40 -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:27:33.102 14:46:40 -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:27:33.102 14:46:40 -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:27:33.102 14:46:40 -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:27:33.102 14:46:40 -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:27:33.102 14:46:40 -- ftl/ftl.sh@50 -- # break 00:27:33.102 14:46:40 -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:27:33.102 14:46:40 -- ftl/ftl.sh@59 -- # base_size=1310720 00:27:33.102 14:46:40 -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:27:33.102 14:46:40 -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:27:33.102 14:46:41 -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:27:33.102 14:46:41 -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:27:33.102 14:46:41 -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:27:33.102 14:46:41 -- ftl/ftl.sh@63 -- # break 00:27:33.102 14:46:41 -- ftl/ftl.sh@66 -- # killprocess 77462 00:27:33.102 14:46:41 -- common/autotest_common.sh@936 -- # '[' -z 77462 ']' 00:27:33.102 14:46:41 -- common/autotest_common.sh@940 -- # kill -0 77462 00:27:33.102 14:46:41 -- common/autotest_common.sh@941 -- # uname 00:27:33.102 14:46:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:33.102 14:46:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77462 00:27:33.102 14:46:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:33.102 
14:46:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:33.102 14:46:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77462' 00:27:33.102 killing process with pid 77462 00:27:33.102 14:46:41 -- common/autotest_common.sh@955 -- # kill 77462 00:27:33.102 14:46:41 -- common/autotest_common.sh@960 -- # wait 77462 00:27:35.664 14:46:43 -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:27:35.664 14:46:43 -- ftl/ftl.sh@73 -- # [[ -z '' ]] 00:27:35.664 14:46:43 -- ftl/ftl.sh@74 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:27:35.664 14:46:43 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:27:35.664 14:46:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:35.664 14:46:43 -- common/autotest_common.sh@10 -- # set +x 00:27:35.664 ************************************ 00:27:35.664 START TEST ftl_fio_basic 00:27:35.664 ************************************ 00:27:35.664 14:46:43 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:27:35.664 * Looking for test storage... 00:27:35.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:35.664 14:46:44 -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:35.664 14:46:44 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:27:35.664 14:46:44 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:35.664 14:46:44 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:35.664 14:46:44 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:35.664 14:46:44 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:35.664 14:46:44 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:35.664 14:46:44 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:35.664 14:46:44 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:35.664 14:46:44 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:35.664 14:46:44 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:35.664 14:46:44 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:35.664 14:46:44 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:35.664 14:46:44 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:35.664 14:46:44 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:35.664 14:46:44 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:35.664 14:46:44 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:35.664 14:46:44 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:35.664 14:46:44 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:35.664 14:46:44 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:35.664 14:46:44 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:35.664 14:46:44 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:35.664 14:46:44 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:35.664 14:46:44 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:35.664 14:46:44 -- ftl/common.sh@22 -- # 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:35.665 14:46:44 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:35.665 14:46:44 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:35.665 14:46:44 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:35.665 14:46:44 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:35.665 14:46:44 -- ftl/fio.sh@11 -- # declare -A suite 00:27:35.665 14:46:44 -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:27:35.665 14:46:44 -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:27:35.665 14:46:44 -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:27:35.665 14:46:44 -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:35.665 14:46:44 -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:27:35.665 14:46:44 -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:27:35.665 14:46:44 -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:27:35.665 14:46:44 -- ftl/fio.sh@26 -- # uuid= 00:27:35.665 14:46:44 -- ftl/fio.sh@27 -- # timeout=240 00:27:35.665 14:46:44 -- ftl/fio.sh@29 -- # [[ y != y ]] 00:27:35.665 14:46:44 -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:27:35.665 14:46:44 -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:27:35.665 14:46:44 -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:27:35.665 14:46:44 -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:35.665 14:46:44 -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:35.665 14:46:44 -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:27:35.665 14:46:44 -- ftl/fio.sh@45 -- # svcpid=77619 00:27:35.665 14:46:44 -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:27:35.665 14:46:44 -- ftl/fio.sh@46 -- # waitforlisten 77619 00:27:35.665 14:46:44 -- common/autotest_common.sh@817 -- # '[' -z 77619 ']' 00:27:35.665 14:46:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:35.665 14:46:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:35.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:35.665 14:46:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:35.665 14:46:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:35.665 14:46:44 -- common/autotest_common.sh@10 -- # set +x 00:27:35.665 [2024-04-17 14:46:44.167988] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:27:35.665 [2024-04-17 14:46:44.168146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77619 ] 00:27:35.934 [2024-04-17 14:46:44.346720] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:36.192 [2024-04-17 14:46:44.683381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:36.192 [2024-04-17 14:46:44.683518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.192 [2024-04-17 14:46:44.683564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:37.605 14:46:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:37.605 14:46:45 -- common/autotest_common.sh@850 -- # return 0 00:27:37.605 14:46:45 -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:37.605 14:46:45 -- ftl/common.sh@54 -- # local name=nvme0 00:27:37.605 14:46:45 -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:37.605 14:46:45 -- ftl/common.sh@56 -- # local size=103424 00:27:37.605 14:46:45 -- ftl/common.sh@59 -- # local base_bdev 00:27:37.605 14:46:45 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:37.605 14:46:46 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:37.605 14:46:46 -- ftl/common.sh@62 -- # local base_size 00:27:37.605 14:46:46 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:37.605 14:46:46 -- common/autotest_common.sh@1364 -- # local bdev_name=nvme0n1 00:27:37.605 14:46:46 -- common/autotest_common.sh@1365 -- # local bdev_info 00:27:37.605 14:46:46 -- common/autotest_common.sh@1366 -- # local bs 00:27:37.605 14:46:46 -- common/autotest_common.sh@1367 -- # local nb 00:27:37.605 14:46:46 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:37.864 14:46:46 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:27:37.864 { 00:27:37.864 "name": "nvme0n1", 00:27:37.864 "aliases": [ 00:27:37.864 "b23e8cf7-5636-4a6e-809e-f7a3f10549fb" 00:27:37.864 ], 00:27:37.864 "product_name": "NVMe disk", 00:27:37.864 "block_size": 4096, 00:27:37.864 "num_blocks": 1310720, 00:27:37.864 "uuid": "b23e8cf7-5636-4a6e-809e-f7a3f10549fb", 00:27:37.864 "assigned_rate_limits": { 00:27:37.864 "rw_ios_per_sec": 0, 00:27:37.864 "rw_mbytes_per_sec": 0, 00:27:37.864 "r_mbytes_per_sec": 0, 00:27:37.864 "w_mbytes_per_sec": 0 00:27:37.864 }, 00:27:37.864 "claimed": false, 00:27:37.864 "zoned": false, 00:27:37.864 "supported_io_types": { 00:27:37.864 "read": true, 00:27:37.864 "write": true, 00:27:37.864 "unmap": true, 00:27:37.864 "write_zeroes": true, 00:27:37.864 "flush": true, 00:27:37.864 "reset": true, 00:27:37.864 "compare": true, 00:27:37.864 "compare_and_write": false, 00:27:37.864 "abort": true, 00:27:37.864 "nvme_admin": true, 00:27:37.864 "nvme_io": true 00:27:37.864 }, 00:27:37.864 "driver_specific": { 00:27:37.864 "nvme": [ 00:27:37.864 { 00:27:37.864 "pci_address": "0000:00:11.0", 00:27:37.864 "trid": { 00:27:37.864 "trtype": "PCIe", 00:27:37.864 "traddr": "0000:00:11.0" 00:27:37.864 }, 00:27:37.864 "ctrlr_data": { 00:27:37.864 "cntlid": 0, 00:27:37.864 "vendor_id": "0x1b36", 00:27:37.864 "model_number": "QEMU NVMe Ctrl", 00:27:37.864 "serial_number": "12341", 00:27:37.864 "firmware_revision": "8.0.0", 00:27:37.864 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:37.864 "oacs": { 00:27:37.864 
"security": 0, 00:27:37.864 "format": 1, 00:27:37.864 "firmware": 0, 00:27:37.864 "ns_manage": 1 00:27:37.864 }, 00:27:37.864 "multi_ctrlr": false, 00:27:37.864 "ana_reporting": false 00:27:37.864 }, 00:27:37.864 "vs": { 00:27:37.864 "nvme_version": "1.4" 00:27:37.864 }, 00:27:37.864 "ns_data": { 00:27:37.864 "id": 1, 00:27:37.864 "can_share": false 00:27:37.864 } 00:27:37.864 } 00:27:37.864 ], 00:27:37.864 "mp_policy": "active_passive" 00:27:37.864 } 00:27:37.864 } 00:27:37.864 ]' 00:27:37.864 14:46:46 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:27:37.865 14:46:46 -- common/autotest_common.sh@1369 -- # bs=4096 00:27:37.865 14:46:46 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:27:37.865 14:46:46 -- common/autotest_common.sh@1370 -- # nb=1310720 00:27:37.865 14:46:46 -- common/autotest_common.sh@1373 -- # bdev_size=5120 00:27:37.865 14:46:46 -- common/autotest_common.sh@1374 -- # echo 5120 00:27:37.865 14:46:46 -- ftl/common.sh@63 -- # base_size=5120 00:27:37.865 14:46:46 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:37.865 14:46:46 -- ftl/common.sh@67 -- # clear_lvols 00:27:37.865 14:46:46 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:37.865 14:46:46 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:38.146 14:46:46 -- ftl/common.sh@28 -- # stores= 00:27:38.146 14:46:46 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:27:38.404 14:46:46 -- ftl/common.sh@68 -- # lvs=dae0c005-611c-4161-ba1c-a71128898676 00:27:38.404 14:46:46 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u dae0c005-611c-4161-ba1c-a71128898676 00:27:38.972 14:46:47 -- ftl/fio.sh@48 -- # split_bdev=28b58116-8f43-4a0b-bf5a-6666531ba86e 00:27:38.972 14:46:47 -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 28b58116-8f43-4a0b-bf5a-6666531ba86e 00:27:38.972 14:46:47 -- ftl/common.sh@35 -- # local name=nvc0 00:27:38.972 14:46:47 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:38.972 14:46:47 -- ftl/common.sh@37 -- # local base_bdev=28b58116-8f43-4a0b-bf5a-6666531ba86e 00:27:38.972 14:46:47 -- ftl/common.sh@38 -- # local cache_size= 00:27:38.972 14:46:47 -- ftl/common.sh@41 -- # get_bdev_size 28b58116-8f43-4a0b-bf5a-6666531ba86e 00:27:38.972 14:46:47 -- common/autotest_common.sh@1364 -- # local bdev_name=28b58116-8f43-4a0b-bf5a-6666531ba86e 00:27:38.972 14:46:47 -- common/autotest_common.sh@1365 -- # local bdev_info 00:27:38.972 14:46:47 -- common/autotest_common.sh@1366 -- # local bs 00:27:38.972 14:46:47 -- common/autotest_common.sh@1367 -- # local nb 00:27:38.972 14:46:47 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 28b58116-8f43-4a0b-bf5a-6666531ba86e 00:27:38.972 14:46:47 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:27:38.972 { 00:27:38.972 "name": "28b58116-8f43-4a0b-bf5a-6666531ba86e", 00:27:38.972 "aliases": [ 00:27:38.972 "lvs/nvme0n1p0" 00:27:38.972 ], 00:27:38.972 "product_name": "Logical Volume", 00:27:38.972 "block_size": 4096, 00:27:38.972 "num_blocks": 26476544, 00:27:38.972 "uuid": "28b58116-8f43-4a0b-bf5a-6666531ba86e", 00:27:38.972 "assigned_rate_limits": { 00:27:38.972 "rw_ios_per_sec": 0, 00:27:38.972 "rw_mbytes_per_sec": 0, 00:27:38.972 "r_mbytes_per_sec": 0, 00:27:38.972 "w_mbytes_per_sec": 0 00:27:38.972 }, 00:27:38.972 "claimed": false, 00:27:38.972 "zoned": false, 00:27:38.972 "supported_io_types": { 
00:27:38.972 "read": true, 00:27:38.972 "write": true, 00:27:38.972 "unmap": true, 00:27:38.972 "write_zeroes": true, 00:27:38.972 "flush": false, 00:27:38.972 "reset": true, 00:27:38.972 "compare": false, 00:27:38.972 "compare_and_write": false, 00:27:38.972 "abort": false, 00:27:38.972 "nvme_admin": false, 00:27:38.972 "nvme_io": false 00:27:38.972 }, 00:27:38.972 "driver_specific": { 00:27:38.972 "lvol": { 00:27:38.972 "lvol_store_uuid": "dae0c005-611c-4161-ba1c-a71128898676", 00:27:38.972 "base_bdev": "nvme0n1", 00:27:38.972 "thin_provision": true, 00:27:38.972 "snapshot": false, 00:27:38.972 "clone": false, 00:27:38.972 "esnap_clone": false 00:27:38.972 } 00:27:38.972 } 00:27:38.972 } 00:27:38.972 ]' 00:27:38.972 14:46:47 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:27:39.279 14:46:47 -- common/autotest_common.sh@1369 -- # bs=4096 00:27:39.279 14:46:47 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:27:39.279 14:46:47 -- common/autotest_common.sh@1370 -- # nb=26476544 00:27:39.279 14:46:47 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:27:39.279 14:46:47 -- common/autotest_common.sh@1374 -- # echo 103424 00:27:39.279 14:46:47 -- ftl/common.sh@41 -- # local base_size=5171 00:27:39.279 14:46:47 -- ftl/common.sh@44 -- # local nvc_bdev 00:27:39.279 14:46:47 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:39.537 14:46:48 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:39.537 14:46:48 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:39.537 14:46:48 -- ftl/common.sh@48 -- # get_bdev_size 28b58116-8f43-4a0b-bf5a-6666531ba86e 00:27:39.537 14:46:48 -- common/autotest_common.sh@1364 -- # local bdev_name=28b58116-8f43-4a0b-bf5a-6666531ba86e 00:27:39.537 14:46:48 -- common/autotest_common.sh@1365 -- # local bdev_info 00:27:39.537 14:46:48 -- common/autotest_common.sh@1366 -- # local bs 00:27:39.537 14:46:48 -- common/autotest_common.sh@1367 -- # local nb 00:27:39.537 14:46:48 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 28b58116-8f43-4a0b-bf5a-6666531ba86e 00:27:39.795 14:46:48 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:27:39.795 { 00:27:39.795 "name": "28b58116-8f43-4a0b-bf5a-6666531ba86e", 00:27:39.795 "aliases": [ 00:27:39.795 "lvs/nvme0n1p0" 00:27:39.795 ], 00:27:39.795 "product_name": "Logical Volume", 00:27:39.795 "block_size": 4096, 00:27:39.795 "num_blocks": 26476544, 00:27:39.796 "uuid": "28b58116-8f43-4a0b-bf5a-6666531ba86e", 00:27:39.796 "assigned_rate_limits": { 00:27:39.796 "rw_ios_per_sec": 0, 00:27:39.796 "rw_mbytes_per_sec": 0, 00:27:39.796 "r_mbytes_per_sec": 0, 00:27:39.796 "w_mbytes_per_sec": 0 00:27:39.796 }, 00:27:39.796 "claimed": false, 00:27:39.796 "zoned": false, 00:27:39.796 "supported_io_types": { 00:27:39.796 "read": true, 00:27:39.796 "write": true, 00:27:39.796 "unmap": true, 00:27:39.796 "write_zeroes": true, 00:27:39.796 "flush": false, 00:27:39.796 "reset": true, 00:27:39.796 "compare": false, 00:27:39.796 "compare_and_write": false, 00:27:39.796 "abort": false, 00:27:39.796 "nvme_admin": false, 00:27:39.796 "nvme_io": false 00:27:39.796 }, 00:27:39.796 "driver_specific": { 00:27:39.796 "lvol": { 00:27:39.796 "lvol_store_uuid": "dae0c005-611c-4161-ba1c-a71128898676", 00:27:39.796 "base_bdev": "nvme0n1", 00:27:39.796 "thin_provision": true, 00:27:39.796 "snapshot": false, 00:27:39.796 "clone": false, 00:27:39.796 "esnap_clone": false 00:27:39.796 } 00:27:39.796 } 00:27:39.796 
} 00:27:39.796 ]' 00:27:39.796 14:46:48 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:27:39.796 14:46:48 -- common/autotest_common.sh@1369 -- # bs=4096 00:27:39.796 14:46:48 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:27:39.796 14:46:48 -- common/autotest_common.sh@1370 -- # nb=26476544 00:27:39.796 14:46:48 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:27:39.796 14:46:48 -- common/autotest_common.sh@1374 -- # echo 103424 00:27:39.796 14:46:48 -- ftl/common.sh@48 -- # cache_size=5171 00:27:39.796 14:46:48 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:40.054 14:46:48 -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:27:40.054 14:46:48 -- ftl/fio.sh@51 -- # l2p_percentage=60 00:27:40.054 14:46:48 -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:27:40.054 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:27:40.054 14:46:48 -- ftl/fio.sh@56 -- # get_bdev_size 28b58116-8f43-4a0b-bf5a-6666531ba86e 00:27:40.054 14:46:48 -- common/autotest_common.sh@1364 -- # local bdev_name=28b58116-8f43-4a0b-bf5a-6666531ba86e 00:27:40.054 14:46:48 -- common/autotest_common.sh@1365 -- # local bdev_info 00:27:40.054 14:46:48 -- common/autotest_common.sh@1366 -- # local bs 00:27:40.054 14:46:48 -- common/autotest_common.sh@1367 -- # local nb 00:27:40.054 14:46:48 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 28b58116-8f43-4a0b-bf5a-6666531ba86e 00:27:40.312 14:46:48 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:27:40.312 { 00:27:40.312 "name": "28b58116-8f43-4a0b-bf5a-6666531ba86e", 00:27:40.312 "aliases": [ 00:27:40.312 "lvs/nvme0n1p0" 00:27:40.312 ], 00:27:40.312 "product_name": "Logical Volume", 00:27:40.312 "block_size": 4096, 00:27:40.312 "num_blocks": 26476544, 00:27:40.312 "uuid": "28b58116-8f43-4a0b-bf5a-6666531ba86e", 00:27:40.312 "assigned_rate_limits": { 00:27:40.312 "rw_ios_per_sec": 0, 00:27:40.312 "rw_mbytes_per_sec": 0, 00:27:40.312 "r_mbytes_per_sec": 0, 00:27:40.312 "w_mbytes_per_sec": 0 00:27:40.312 }, 00:27:40.312 "claimed": false, 00:27:40.312 "zoned": false, 00:27:40.312 "supported_io_types": { 00:27:40.312 "read": true, 00:27:40.312 "write": true, 00:27:40.312 "unmap": true, 00:27:40.312 "write_zeroes": true, 00:27:40.312 "flush": false, 00:27:40.312 "reset": true, 00:27:40.312 "compare": false, 00:27:40.312 "compare_and_write": false, 00:27:40.312 "abort": false, 00:27:40.312 "nvme_admin": false, 00:27:40.312 "nvme_io": false 00:27:40.312 }, 00:27:40.312 "driver_specific": { 00:27:40.312 "lvol": { 00:27:40.312 "lvol_store_uuid": "dae0c005-611c-4161-ba1c-a71128898676", 00:27:40.312 "base_bdev": "nvme0n1", 00:27:40.312 "thin_provision": true, 00:27:40.312 "snapshot": false, 00:27:40.312 "clone": false, 00:27:40.312 "esnap_clone": false 00:27:40.312 } 00:27:40.312 } 00:27:40.312 } 00:27:40.312 ]' 00:27:40.312 14:46:48 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:27:40.312 14:46:48 -- common/autotest_common.sh@1369 -- # bs=4096 00:27:40.312 14:46:48 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:27:40.570 14:46:48 -- common/autotest_common.sh@1370 -- # nb=26476544 00:27:40.570 14:46:48 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:27:40.570 14:46:48 -- common/autotest_common.sh@1374 -- # echo 103424 00:27:40.570 14:46:48 -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:27:40.570 14:46:48 -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:27:40.570 14:46:48 -- 
ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 28b58116-8f43-4a0b-bf5a-6666531ba86e -c nvc0n1p0 --l2p_dram_limit 60 00:27:40.829 [2024-04-17 14:46:49.205885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.830 [2024-04-17 14:46:49.205949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:40.830 [2024-04-17 14:46:49.205971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:40.830 [2024-04-17 14:46:49.205986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.830 [2024-04-17 14:46:49.206077] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.830 [2024-04-17 14:46:49.206092] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:40.830 [2024-04-17 14:46:49.206107] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:27:40.830 [2024-04-17 14:46:49.206119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.830 [2024-04-17 14:46:49.206156] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:40.830 [2024-04-17 14:46:49.207486] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:40.830 [2024-04-17 14:46:49.207541] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.830 [2024-04-17 14:46:49.207554] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:40.830 [2024-04-17 14:46:49.207571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.389 ms 00:27:40.830 [2024-04-17 14:46:49.207583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.830 [2024-04-17 14:46:49.207762] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a97b1eff-b992-4579-95ea-d17d01dd14aa 00:27:40.830 [2024-04-17 14:46:49.209403] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.830 [2024-04-17 14:46:49.209451] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:40.830 [2024-04-17 14:46:49.209466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:27:40.830 [2024-04-17 14:46:49.209482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.830 [2024-04-17 14:46:49.217467] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.830 [2024-04-17 14:46:49.217549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:40.830 [2024-04-17 14:46:49.217567] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.870 ms 00:27:40.830 [2024-04-17 14:46:49.217582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.830 [2024-04-17 14:46:49.217705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.830 [2024-04-17 14:46:49.217723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:40.830 [2024-04-17 14:46:49.217736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:27:40.830 [2024-04-17 14:46:49.217751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.830 [2024-04-17 14:46:49.217838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.830 [2024-04-17 14:46:49.217859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register 
IO device 00:27:40.830 [2024-04-17 14:46:49.217872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:40.830 [2024-04-17 14:46:49.217890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.830 [2024-04-17 14:46:49.217941] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:40.830 [2024-04-17 14:46:49.225623] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.830 [2024-04-17 14:46:49.225702] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:40.830 [2024-04-17 14:46:49.225722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.686 ms 00:27:40.830 [2024-04-17 14:46:49.225743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.830 [2024-04-17 14:46:49.225823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.830 [2024-04-17 14:46:49.225837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:40.830 [2024-04-17 14:46:49.225853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:40.830 [2024-04-17 14:46:49.225865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.830 [2024-04-17 14:46:49.225940] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:40.830 [2024-04-17 14:46:49.226081] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:27:40.830 [2024-04-17 14:46:49.226104] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:40.830 [2024-04-17 14:46:49.226121] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:27:40.830 [2024-04-17 14:46:49.226140] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:40.830 [2024-04-17 14:46:49.226154] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:40.830 [2024-04-17 14:46:49.226177] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:40.830 [2024-04-17 14:46:49.226189] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:40.830 [2024-04-17 14:46:49.226204] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:27:40.830 [2024-04-17 14:46:49.226216] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:27:40.830 [2024-04-17 14:46:49.226233] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.830 [2024-04-17 14:46:49.226245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:40.830 [2024-04-17 14:46:49.226259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:27:40.830 [2024-04-17 14:46:49.226272] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.830 [2024-04-17 14:46:49.226372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.830 [2024-04-17 14:46:49.226386] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:40.830 [2024-04-17 14:46:49.226401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:27:40.830 [2024-04-17 14:46:49.226416] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.830 
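The capacities in this trace cross-check against the bdevs created earlier; with the 4 KiB block size and 4-byte L2P entries reported above:

echo $(( 26476544 * 4096 / 1024 / 1024 ))   # 103424 -> "Base device capacity: 103424.00 MiB"
echo $(( 20971520 * 4 / 1024 / 1024 ))      # 80     -> the 80.00 MiB l2p region dumped next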
[2024-04-17 14:46:49.226527] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:40.830 [2024-04-17 14:46:49.226543] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:40.830 [2024-04-17 14:46:49.226563] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:40.830 [2024-04-17 14:46:49.226576] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:40.830 [2024-04-17 14:46:49.226594] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:40.830 [2024-04-17 14:46:49.226606] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:40.830 [2024-04-17 14:46:49.226620] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:40.830 [2024-04-17 14:46:49.226631] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:40.830 [2024-04-17 14:46:49.226645] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:40.830 [2024-04-17 14:46:49.226656] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:40.830 [2024-04-17 14:46:49.226675] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:40.830 [2024-04-17 14:46:49.226686] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:40.830 [2024-04-17 14:46:49.226700] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:40.830 [2024-04-17 14:46:49.226711] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:40.830 [2024-04-17 14:46:49.226726] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:27:40.830 [2024-04-17 14:46:49.226741] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:40.831 [2024-04-17 14:46:49.226755] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:40.831 [2024-04-17 14:46:49.226766] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:27:40.831 [2024-04-17 14:46:49.226780] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:40.831 [2024-04-17 14:46:49.226791] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:27:40.831 [2024-04-17 14:46:49.226807] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:27:40.831 [2024-04-17 14:46:49.226819] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:27:40.831 [2024-04-17 14:46:49.226834] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:40.831 [2024-04-17 14:46:49.226845] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:40.831 [2024-04-17 14:46:49.226859] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:27:40.831 [2024-04-17 14:46:49.226870] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:40.831 [2024-04-17 14:46:49.226884] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:27:40.831 [2024-04-17 14:46:49.226895] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:27:40.831 [2024-04-17 14:46:49.226909] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:40.831 [2024-04-17 14:46:49.226920] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:40.831 [2024-04-17 14:46:49.226934] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:27:40.831 [2024-04-17 14:46:49.226945] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region 
p2l3 00:27:40.831 [2024-04-17 14:46:49.226960] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:27:40.831 [2024-04-17 14:46:49.226971] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:27:40.831 [2024-04-17 14:46:49.226984] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:40.831 [2024-04-17 14:46:49.226995] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:40.831 [2024-04-17 14:46:49.227035] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:40.831 [2024-04-17 14:46:49.227047] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:40.831 [2024-04-17 14:46:49.227063] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:27:40.831 [2024-04-17 14:46:49.227074] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:40.831 [2024-04-17 14:46:49.227087] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:40.831 [2024-04-17 14:46:49.227099] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:40.831 [2024-04-17 14:46:49.227113] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:40.831 [2024-04-17 14:46:49.227125] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:40.831 [2024-04-17 14:46:49.227143] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:40.831 [2024-04-17 14:46:49.227155] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:40.831 [2024-04-17 14:46:49.227168] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:40.831 [2024-04-17 14:46:49.227184] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:40.831 [2024-04-17 14:46:49.227198] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:40.831 [2024-04-17 14:46:49.227210] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:40.831 [2024-04-17 14:46:49.227226] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:40.831 [2024-04-17 14:46:49.227241] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:40.831 [2024-04-17 14:46:49.227260] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:40.831 [2024-04-17 14:46:49.227273] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:27:40.831 [2024-04-17 14:46:49.227290] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:27:40.831 [2024-04-17 14:46:49.227302] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:27:40.831 [2024-04-17 14:46:49.227318] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:27:40.831 [2024-04-17 14:46:49.227330] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:27:40.831 [2024-04-17 14:46:49.227345] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 
blk_offs:0x5d20 blk_sz:0x400 00:27:40.831 [2024-04-17 14:46:49.227358] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:27:40.831 [2024-04-17 14:46:49.227373] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:27:40.831 [2024-04-17 14:46:49.227386] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:27:40.831 [2024-04-17 14:46:49.227401] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:27:40.831 [2024-04-17 14:46:49.227414] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:27:40.831 [2024-04-17 14:46:49.227431] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:27:40.831 [2024-04-17 14:46:49.227444] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:40.831 [2024-04-17 14:46:49.227459] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:40.831 [2024-04-17 14:46:49.227472] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:40.831 [2024-04-17 14:46:49.227501] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:40.831 [2024-04-17 14:46:49.227515] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:40.831 [2024-04-17 14:46:49.227530] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:40.831 [2024-04-17 14:46:49.227544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.831 [2024-04-17 14:46:49.227559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:40.831 [2024-04-17 14:46:49.227571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.075 ms 00:27:40.831 [2024-04-17 14:46:49.227586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.831 [2024-04-17 14:46:49.257377] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.831 [2024-04-17 14:46:49.257444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:40.831 [2024-04-17 14:46:49.257462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.688 ms 00:27:40.831 [2024-04-17 14:46:49.257478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.831 [2024-04-17 14:46:49.257631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.831 [2024-04-17 14:46:49.257655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:40.832 [2024-04-17 14:46:49.257672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:27:40.832 [2024-04-17 14:46:49.257687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.832 [2024-04-17 14:46:49.323536] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:27:40.832 [2024-04-17 14:46:49.323604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:40.832 [2024-04-17 14:46:49.323622] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.756 ms 00:27:40.832 [2024-04-17 14:46:49.323638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.832 [2024-04-17 14:46:49.323708] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.832 [2024-04-17 14:46:49.323725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:40.832 [2024-04-17 14:46:49.323738] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:40.832 [2024-04-17 14:46:49.323752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.832 [2024-04-17 14:46:49.324278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.832 [2024-04-17 14:46:49.324307] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:40.832 [2024-04-17 14:46:49.324321] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:27:40.832 [2024-04-17 14:46:49.324336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.832 [2024-04-17 14:46:49.324485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.832 [2024-04-17 14:46:49.324518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:40.832 [2024-04-17 14:46:49.324531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:27:40.832 [2024-04-17 14:46:49.324546] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.832 [2024-04-17 14:46:49.364472] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.832 [2024-04-17 14:46:49.364560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:40.832 [2024-04-17 14:46:49.364582] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.891 ms 00:27:40.832 [2024-04-17 14:46:49.364603] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.832 [2024-04-17 14:46:49.382799] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:27:40.832 [2024-04-17 14:46:49.402369] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.832 [2024-04-17 14:46:49.402436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:40.832 [2024-04-17 14:46:49.402458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.566 ms 00:27:40.832 [2024-04-17 14:46:49.402471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.090 [2024-04-17 14:46:49.691704] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:41.090 [2024-04-17 14:46:49.691769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:41.090 [2024-04-17 14:46:49.691792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 289.148 ms 00:27:41.090 [2024-04-17 14:46:49.691804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:41.090 [2024-04-17 14:46:49.691912] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
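The trace above walks the 'FTL startup' management sequence step by step: superblock and layout setup, metadata and L2P initialization, and, because this is a first startup, a scrub of the NV cache data region (the scrub progress notice continues below). A minimal sketch of exercising the same bdev lifecycle by hand through SPDK's rpc.py follows; the base and cache bdev names and the bdev_ftl_create flags are assumptions for illustration, not values taken from this run:

    #!/usr/bin/env bash
    # Hypothetical standalone walk-through of the FTL bdev lifecycle traced above.
    # Assumes a running SPDK target and two existing bdevs ("base0", "nvc0n1p0");
    # the names and the bdev_ftl_create flags are illustrative.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Create the FTL bdev: one base device for data, one NV cache device for the
    # write buffer. A first startup triggers the "Scrub NV cache" step in the trace.
    $rpc bdev_ftl_create -b ftl0 -d base0 -c nvc0n1p0

    # Block until bdev examine callbacks finish, then poll for the bdev with a
    # timeout; this mirrors the waitforbdev helper seen later in this log
    # (bdev_wait_for_examine followed by bdev_get_bdevs -b ftl0 -t 2000).
    $rpc bdev_wait_for_examine
    $rpc bdev_get_bdevs -b ftl0 -t 2000

    # Tear down; this runs the 'FTL shutdown' management process logged further down.
    $rpc bdev_ftl_unload -b ftl0

The bdev_wait_for_examine / bdev_get_bdevs pair is the same handshake that waitforbdev performs in the xtrace below, before the fio stages start.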
00:27:41.090 [2024-04-17 14:46:49.691930] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:27:47.652 [2024-04-17 14:46:55.478811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.652 [2024-04-17 14:46:55.478884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:47.652 [2024-04-17 14:46:55.478930] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5786.860 ms 00:27:47.652 [2024-04-17 14:46:55.478943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.652 [2024-04-17 14:46:55.479175] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.652 [2024-04-17 14:46:55.479190] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:47.652 [2024-04-17 14:46:55.479210] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:27:47.652 [2024-04-17 14:46:55.479222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.652 [2024-04-17 14:46:55.523953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.652 [2024-04-17 14:46:55.524039] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:27:47.652 [2024-04-17 14:46:55.524085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.634 ms 00:27:47.652 [2024-04-17 14:46:55.524102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.652 [2024-04-17 14:46:55.566054] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.652 [2024-04-17 14:46:55.566116] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:47.652 [2024-04-17 14:46:55.566140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.871 ms 00:27:47.652 [2024-04-17 14:46:55.566152] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.652 [2024-04-17 14:46:55.566716] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.652 [2024-04-17 14:46:55.566737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:47.652 [2024-04-17 14:46:55.566756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.512 ms 00:27:47.652 [2024-04-17 14:46:55.566768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.652 [2024-04-17 14:46:55.689847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.652 [2024-04-17 14:46:55.689915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:47.652 [2024-04-17 14:46:55.689937] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 122.971 ms 00:27:47.652 [2024-04-17 14:46:55.689949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.652 [2024-04-17 14:46:55.736336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.652 [2024-04-17 14:46:55.736413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:47.652 [2024-04-17 14:46:55.736435] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.222 ms 00:27:47.652 [2024-04-17 14:46:55.736447] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.652 [2024-04-17 14:46:55.741397] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.652 [2024-04-17 14:46:55.741435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 
00:27:47.652 [2024-04-17 14:46:55.741453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.856 ms 00:27:47.652 [2024-04-17 14:46:55.741465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.652 [2024-04-17 14:46:55.785153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.652 [2024-04-17 14:46:55.785207] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:47.652 [2024-04-17 14:46:55.785228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.565 ms 00:27:47.652 [2024-04-17 14:46:55.785239] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.652 [2024-04-17 14:46:55.785363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.652 [2024-04-17 14:46:55.785377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:47.653 [2024-04-17 14:46:55.785392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:47.653 [2024-04-17 14:46:55.785402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.653 [2024-04-17 14:46:55.785575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.653 [2024-04-17 14:46:55.785589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:47.653 [2024-04-17 14:46:55.785604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:27:47.653 [2024-04-17 14:46:55.785615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.653 [2024-04-17 14:46:55.786871] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 6580.419 ms, result 0 00:27:47.653 { 00:27:47.653 "name": "ftl0", 00:27:47.653 "uuid": "a97b1eff-b992-4579-95ea-d17d01dd14aa" 00:27:47.653 } 00:27:47.653 14:46:55 -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:27:47.653 14:46:55 -- common/autotest_common.sh@885 -- # local bdev_name=ftl0 00:27:47.653 14:46:55 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:27:47.653 14:46:55 -- common/autotest_common.sh@887 -- # local i 00:27:47.653 14:46:55 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:27:47.653 14:46:55 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:27:47.653 14:46:55 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:47.653 14:46:56 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:27:47.922 [ 00:27:47.922 { 00:27:47.922 "name": "ftl0", 00:27:47.922 "aliases": [ 00:27:47.922 "a97b1eff-b992-4579-95ea-d17d01dd14aa" 00:27:47.922 ], 00:27:47.922 "product_name": "FTL disk", 00:27:47.922 "block_size": 4096, 00:27:47.922 "num_blocks": 20971520, 00:27:47.922 "uuid": "a97b1eff-b992-4579-95ea-d17d01dd14aa", 00:27:47.922 "assigned_rate_limits": { 00:27:47.922 "rw_ios_per_sec": 0, 00:27:47.922 "rw_mbytes_per_sec": 0, 00:27:47.922 "r_mbytes_per_sec": 0, 00:27:47.922 "w_mbytes_per_sec": 0 00:27:47.922 }, 00:27:47.922 "claimed": false, 00:27:47.922 "zoned": false, 00:27:47.922 "supported_io_types": { 00:27:47.922 "read": true, 00:27:47.922 "write": true, 00:27:47.922 "unmap": true, 00:27:47.922 "write_zeroes": true, 00:27:47.922 "flush": true, 00:27:47.922 "reset": false, 00:27:47.922 "compare": false, 00:27:47.922 "compare_and_write": false, 00:27:47.922 "abort": false, 00:27:47.922 "nvme_admin": false, 00:27:47.922 "nvme_io": false 00:27:47.922 }, 
00:27:47.922 "driver_specific": { 00:27:47.922 "ftl": { 00:27:47.922 "base_bdev": "28b58116-8f43-4a0b-bf5a-6666531ba86e", 00:27:47.922 "cache": "nvc0n1p0" 00:27:47.922 } 00:27:47.922 } 00:27:47.922 } 00:27:47.922 ] 00:27:47.922 14:46:56 -- common/autotest_common.sh@893 -- # return 0 00:27:47.922 14:46:56 -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:27:47.922 14:46:56 -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:27:48.180 14:46:56 -- ftl/fio.sh@70 -- # echo ']}' 00:27:48.180 14:46:56 -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:48.438 [2024-04-17 14:46:56.864099] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.438 [2024-04-17 14:46:56.864321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:48.438 [2024-04-17 14:46:56.864423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:48.438 [2024-04-17 14:46:56.864529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.438 [2024-04-17 14:46:56.864639] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:48.438 [2024-04-17 14:46:56.868796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.438 [2024-04-17 14:46:56.868966] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:48.438 [2024-04-17 14:46:56.869112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.026 ms 00:27:48.438 [2024-04-17 14:46:56.869218] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.438 [2024-04-17 14:46:56.869816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.438 [2024-04-17 14:46:56.869952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:48.438 [2024-04-17 14:46:56.870049] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.518 ms 00:27:48.438 [2024-04-17 14:46:56.870142] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.438 [2024-04-17 14:46:56.873222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.438 [2024-04-17 14:46:56.873347] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:48.438 [2024-04-17 14:46:56.873447] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.011 ms 00:27:48.438 [2024-04-17 14:46:56.873547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.438 [2024-04-17 14:46:56.879564] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.438 [2024-04-17 14:46:56.879719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:27:48.438 [2024-04-17 14:46:56.879810] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.942 ms 00:27:48.438 [2024-04-17 14:46:56.879851] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.438 [2024-04-17 14:46:56.925816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.438 [2024-04-17 14:46:56.926097] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:48.438 [2024-04-17 14:46:56.926242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.781 ms 00:27:48.438 [2024-04-17 14:46:56.926367] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.438 [2024-04-17 14:46:56.954688] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.438 [2024-04-17 14:46:56.954927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:48.438 [2024-04-17 14:46:56.955027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.197 ms 00:27:48.438 [2024-04-17 14:46:56.955070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.438 [2024-04-17 14:46:56.955462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.438 [2024-04-17 14:46:56.955620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:48.438 [2024-04-17 14:46:56.955701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:27:48.438 [2024-04-17 14:46:56.955740] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.438 [2024-04-17 14:46:57.002022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.438 [2024-04-17 14:46:57.002277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:27:48.438 [2024-04-17 14:46:57.002402] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.213 ms 00:27:48.438 [2024-04-17 14:46:57.002452] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.697 [2024-04-17 14:46:57.050064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.697 [2024-04-17 14:46:57.050350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:27:48.697 [2024-04-17 14:46:57.050478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.391 ms 00:27:48.697 [2024-04-17 14:46:57.050545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.697 [2024-04-17 14:46:57.097844] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.697 [2024-04-17 14:46:57.098131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:48.697 [2024-04-17 14:46:57.098273] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.111 ms 00:27:48.697 [2024-04-17 14:46:57.098384] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.698 [2024-04-17 14:46:57.143429] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.698 [2024-04-17 14:46:57.143723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:48.698 [2024-04-17 14:46:57.143853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.776 ms 00:27:48.698 [2024-04-17 14:46:57.143947] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.698 [2024-04-17 14:46:57.144072] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:48.698 [2024-04-17 14:46:57.144176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.144309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.144421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.144609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.144761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.144925] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.145032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.145160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.145325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.145469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.145557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.145680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.145780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.145851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.145966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.146043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.146137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.146251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.146391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.146473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.146644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.146768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.146840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.146918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.147027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.147176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.147364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.147540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.147651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.147721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 
14:46:57.147843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.147920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.148005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.148118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.148245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.148326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.148439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.148530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.148619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.148736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.148855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.148972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.149090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.149266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.149423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.149613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.149808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.149971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.150045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.150115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.150225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.150388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.150547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.150630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.150701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:27:48.698 [2024-04-17 14:46:57.150819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.150951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.151104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.151179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.151307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.151503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.151668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.151775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.151891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.152035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.152175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.152249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.152372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.152520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.152601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.152726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.152851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.152961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.153040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.153119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.153232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.153379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.153508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.153579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.153705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.153777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.153860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.153970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.154109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:48.698 [2024-04-17 14:46:57.154232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:48.699 [2024-04-17 14:46:57.154364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:48.699 [2024-04-17 14:46:57.154442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:48.699 [2024-04-17 14:46:57.154551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:48.699 [2024-04-17 14:46:57.154708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:48.699 [2024-04-17 14:46:57.154833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:48.699 [2024-04-17 14:46:57.154951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:48.699 [2024-04-17 14:46:57.155066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:48.699 [2024-04-17 14:46:57.155158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:48.699 [2024-04-17 14:46:57.155315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:48.699 [2024-04-17 14:46:57.155482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:48.699 [2024-04-17 14:46:57.155645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:48.699 [2024-04-17 14:46:57.155780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:48.699 [2024-04-17 14:46:57.155864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:48.699 [2024-04-17 14:46:57.155938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:48.699 [2024-04-17 14:46:57.156123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:48.699 [2024-04-17 14:46:57.156259] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:48.699 [2024-04-17 14:46:57.156397] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a97b1eff-b992-4579-95ea-d17d01dd14aa 00:27:48.699 [2024-04-17 14:46:57.156588] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:48.699 [2024-04-17 14:46:57.156651] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:48.699 [2024-04-17 14:46:57.156710] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:48.699 [2024-04-17 14:46:57.156839] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:48.699 [2024-04-17 14:46:57.156893] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:48.699 [2024-04-17 14:46:57.157004] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:48.699 [2024-04-17 14:46:57.157059] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:48.699 [2024-04-17 14:46:57.157155] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:48.699 [2024-04-17 14:46:57.157206] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:48.699 [2024-04-17 14:46:57.157304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.699 [2024-04-17 14:46:57.157359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:48.699 [2024-04-17 14:46:57.157461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.234 ms 00:27:48.699 [2024-04-17 14:46:57.157525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.699 [2024-04-17 14:46:57.182486] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.699 [2024-04-17 14:46:57.182766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:48.699 [2024-04-17 14:46:57.182867] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.765 ms 00:27:48.699 [2024-04-17 14:46:57.182972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.699 [2024-04-17 14:46:57.183398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.699 [2024-04-17 14:46:57.183531] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:48.699 [2024-04-17 14:46:57.183627] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:27:48.699 [2024-04-17 14:46:57.183715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.699 [2024-04-17 14:46:57.267279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:48.699 [2024-04-17 14:46:57.267583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:48.699 [2024-04-17 14:46:57.267711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:48.699 [2024-04-17 14:46:57.267763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.699 [2024-04-17 14:46:57.267892] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:48.699 [2024-04-17 14:46:57.267938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:48.699 [2024-04-17 14:46:57.268032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:48.699 [2024-04-17 14:46:57.268079] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.699 [2024-04-17 14:46:57.268348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:48.699 [2024-04-17 14:46:57.268459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:48.699 [2024-04-17 14:46:57.268593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:48.699 [2024-04-17 14:46:57.268707] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.699 [2024-04-17 14:46:57.268789] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:48.699 [2024-04-17 14:46:57.268840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
00:27:48.699 [2024-04-17 14:46:57.268937] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:48.699 [2024-04-17 14:46:57.269025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.958 [2024-04-17 14:46:57.430924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:48.958 [2024-04-17 14:46:57.431198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:48.958 [2024-04-17 14:46:57.431332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:48.958 [2024-04-17 14:46:57.431430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.958 [2024-04-17 14:46:57.488589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:48.958 [2024-04-17 14:46:57.488860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:48.958 [2024-04-17 14:46:57.488959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:48.958 [2024-04-17 14:46:57.489001] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.958 [2024-04-17 14:46:57.489145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:48.958 [2024-04-17 14:46:57.489184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:48.958 [2024-04-17 14:46:57.489267] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:48.958 [2024-04-17 14:46:57.489339] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.958 [2024-04-17 14:46:57.489472] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:48.958 [2024-04-17 14:46:57.489542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:48.958 [2024-04-17 14:46:57.489624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:48.958 [2024-04-17 14:46:57.489711] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.958 [2024-04-17 14:46:57.489926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:48.958 [2024-04-17 14:46:57.490049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:48.958 [2024-04-17 14:46:57.490134] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:48.958 [2024-04-17 14:46:57.490173] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.958 [2024-04-17 14:46:57.490300] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:48.958 [2024-04-17 14:46:57.490330] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:48.958 [2024-04-17 14:46:57.490346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:48.958 [2024-04-17 14:46:57.490358] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.958 [2024-04-17 14:46:57.490422] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:48.958 [2024-04-17 14:46:57.490435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:48.958 [2024-04-17 14:46:57.490450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:48.958 [2024-04-17 14:46:57.490461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.958 [2024-04-17 14:46:57.490543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:48.958 [2024-04-17 14:46:57.490557] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:48.958 [2024-04-17 14:46:57.490577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:48.958 [2024-04-17 14:46:57.490592] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.958 [2024-04-17 14:46:57.490777] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 626.633 ms, result 0 00:27:48.958 true 00:27:48.958 14:46:57 -- ftl/fio.sh@75 -- # killprocess 77619 00:27:48.958 14:46:57 -- common/autotest_common.sh@936 -- # '[' -z 77619 ']' 00:27:48.958 14:46:57 -- common/autotest_common.sh@940 -- # kill -0 77619 00:27:48.958 14:46:57 -- common/autotest_common.sh@941 -- # uname 00:27:48.958 14:46:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:48.958 14:46:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77619 00:27:48.958 14:46:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:48.958 14:46:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:48.958 14:46:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77619' 00:27:48.958 killing process with pid 77619 00:27:48.958 14:46:57 -- common/autotest_common.sh@955 -- # kill 77619 00:27:48.958 14:46:57 -- common/autotest_common.sh@960 -- # wait 77619 00:27:58.930 14:47:06 -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:58.930 14:47:06 -- ftl/fio.sh@78 -- # for test in ${tests} 00:27:58.930 14:47:06 -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:27:58.930 14:47:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:58.930 14:47:06 -- common/autotest_common.sh@10 -- # set +x 00:27:58.930 14:47:06 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:27:58.930 14:47:06 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:27:58.930 14:47:06 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:58.930 14:47:06 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:58.930 14:47:06 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:58.930 14:47:06 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:58.930 14:47:06 -- common/autotest_common.sh@1327 -- # shift 00:27:58.930 14:47:06 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:58.930 14:47:06 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:58.930 14:47:06 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:58.930 14:47:06 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:58.930 14:47:06 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:58.930 14:47:06 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:58.930 14:47:06 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:58.930 14:47:06 -- common/autotest_common.sh@1333 -- # break 00:27:58.930 14:47:06 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:58.930 14:47:06 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:27:58.930 test: (g=0): rw=randwrite, bs=(R) 
68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:27:58.930 fio-3.35 00:27:58.930 Starting 1 thread 00:28:04.264 00:28:04.264 test: (groupid=0, jobs=1): err= 0: pid=77870: Wed Apr 17 14:47:12 2024 00:28:04.264 read: IOPS=1067, BW=70.9MiB/s (74.3MB/s)(255MiB/3590msec) 00:28:04.264 slat (nsec): min=4561, max=34629, avg=7009.03, stdev=3066.19 00:28:04.264 clat (usec): min=280, max=983, avg=412.47, stdev=59.54 00:28:04.264 lat (usec): min=287, max=998, avg=419.48, stdev=59.99 00:28:04.264 clat percentiles (usec): 00:28:04.264 | 1.00th=[ 310], 5.00th=[ 326], 10.00th=[ 334], 20.00th=[ 355], 00:28:04.264 | 30.00th=[ 388], 40.00th=[ 400], 50.00th=[ 408], 60.00th=[ 420], 00:28:04.264 | 70.00th=[ 441], 80.00th=[ 465], 90.00th=[ 486], 95.00th=[ 506], 00:28:04.264 | 99.00th=[ 570], 99.50th=[ 594], 99.90th=[ 717], 99.95th=[ 848], 00:28:04.264 | 99.99th=[ 988] 00:28:04.264 write: IOPS=1075, BW=71.4MiB/s (74.9MB/s)(256MiB/3586msec); 0 zone resets 00:28:04.264 slat (nsec): min=15947, max=94383, avg=22980.99, stdev=5873.87 00:28:04.264 clat (usec): min=303, max=1261, avg=477.30, stdev=67.39 00:28:04.264 lat (usec): min=334, max=1280, avg=500.28, stdev=67.79 00:28:04.264 clat percentiles (usec): 00:28:04.264 | 1.00th=[ 355], 5.00th=[ 392], 10.00th=[ 412], 20.00th=[ 424], 00:28:04.264 | 30.00th=[ 437], 40.00th=[ 449], 50.00th=[ 474], 60.00th=[ 490], 00:28:04.264 | 70.00th=[ 502], 80.00th=[ 523], 90.00th=[ 562], 95.00th=[ 586], 00:28:04.264 | 99.00th=[ 701], 99.50th=[ 775], 99.90th=[ 898], 99.95th=[ 1057], 00:28:04.264 | 99.99th=[ 1254] 00:28:04.264 bw ( KiB/s): min=70040, max=76568, per=100.00%, avg=73168.00, stdev=2248.46, samples=7 00:28:04.264 iops : min= 1030, max= 1126, avg=1076.00, stdev=33.07, samples=7 00:28:04.264 lat (usec) : 500=81.40%, 750=18.26%, 1000=0.31% 00:28:04.264 lat (msec) : 2=0.03% 00:28:04.264 cpu : usr=99.22%, sys=0.14%, ctx=30, majf=0, minf=1171 00:28:04.264 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:04.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.264 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.264 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.264 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:04.264 00:28:04.264 Run status group 0 (all jobs): 00:28:04.264 READ: bw=70.9MiB/s (74.3MB/s), 70.9MiB/s-70.9MiB/s (74.3MB/s-74.3MB/s), io=255MiB (267MB), run=3590-3590msec 00:28:04.264 WRITE: bw=71.4MiB/s (74.9MB/s), 71.4MiB/s-71.4MiB/s (74.9MB/s-74.9MB/s), io=256MiB (269MB), run=3586-3586msec 00:28:06.203 ----------------------------------------------------- 00:28:06.203 Suppressions used: 00:28:06.203 count bytes template 00:28:06.203 1 5 /usr/src/fio/parse.c 00:28:06.203 1 8 libtcmalloc_minimal.so 00:28:06.203 1 904 libcrypto.so 00:28:06.203 ----------------------------------------------------- 00:28:06.203 00:28:06.203 14:47:14 -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:28:06.203 14:47:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:06.203 14:47:14 -- common/autotest_common.sh@10 -- # set +x 00:28:06.203 14:47:14 -- ftl/fio.sh@78 -- # for test in ${tests} 00:28:06.203 14:47:14 -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:28:06.203 14:47:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:06.203 14:47:14 -- common/autotest_common.sh@10 -- # set +x 00:28:06.203 14:47:14 -- ftl/fio.sh@80 -- # fio_bdev 
/home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:28:06.203 14:47:14 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:28:06.203 14:47:14 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:28:06.203 14:47:14 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:06.203 14:47:14 -- common/autotest_common.sh@1325 -- # local sanitizers 00:28:06.203 14:47:14 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:06.203 14:47:14 -- common/autotest_common.sh@1327 -- # shift 00:28:06.203 14:47:14 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:28:06.203 14:47:14 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:06.203 14:47:14 -- common/autotest_common.sh@1331 -- # grep libasan 00:28:06.203 14:47:14 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:06.203 14:47:14 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:06.203 14:47:14 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:06.203 14:47:14 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:06.203 14:47:14 -- common/autotest_common.sh@1333 -- # break 00:28:06.203 14:47:14 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:06.203 14:47:14 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:28:06.203 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:28:06.204 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:28:06.204 fio-3.35 00:28:06.204 Starting 2 threads 00:28:32.758 00:28:32.758 first_half: (groupid=0, jobs=1): err= 0: pid=77979: Wed Apr 17 14:47:41 2024 00:28:32.758 read: IOPS=2633, BW=10.3MiB/s (10.8MB/s)(256MiB/24859msec) 00:28:32.758 slat (usec): min=3, max=530, avg= 7.03, stdev= 3.22 00:28:32.758 clat (usec): min=688, max=306100, avg=40723.92, stdev=26896.80 00:28:32.758 lat (usec): min=693, max=306110, avg=40730.95, stdev=26897.28 00:28:32.758 clat percentiles (msec): 00:28:32.758 | 1.00th=[ 9], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:28:32.758 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:28:32.758 | 70.00th=[ 38], 80.00th=[ 40], 90.00th=[ 47], 95.00th=[ 82], 00:28:32.758 | 99.00th=[ 184], 99.50th=[ 203], 99.90th=[ 236], 99.95th=[ 268], 00:28:32.758 | 99.99th=[ 300] 00:28:32.758 write: IOPS=2641, BW=10.3MiB/s (10.8MB/s)(256MiB/24813msec); 0 zone resets 00:28:32.758 slat (usec): min=4, max=1258, avg= 8.42, stdev= 8.68 00:28:32.758 clat (usec): min=408, max=48728, avg=7830.42, stdev=7484.00 00:28:32.758 lat (usec): min=415, max=48734, avg=7838.84, stdev=7484.25 00:28:32.758 clat percentiles (usec): 00:28:32.758 | 1.00th=[ 1029], 5.00th=[ 1369], 10.00th=[ 1778], 20.00th=[ 3195], 00:28:32.758 | 30.00th=[ 4178], 40.00th=[ 5276], 50.00th=[ 6194], 60.00th=[ 6980], 00:28:32.758 | 70.00th=[ 8094], 80.00th=[ 9896], 90.00th=[13829], 95.00th=[22676], 00:28:32.758 | 99.00th=[41157], 99.50th=[43779], 99.90th=[46400], 99.95th=[46924], 00:28:32.758 | 99.99th=[47973] 00:28:32.758 bw ( KiB/s): min= 3416, max=43008, per=100.00%, 
avg=21697.29, stdev=13628.79, samples=24 00:28:32.758 iops : min= 854, max=10752, avg=5424.29, stdev=3407.16, samples=24 00:28:32.758 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.35% 00:28:32.758 lat (msec) : 2=5.66%, 4=8.12%, 10=26.69%, 20=7.87%, 50=47.47% 00:28:32.758 lat (msec) : 100=1.70%, 250=2.04%, 500=0.03% 00:28:32.758 cpu : usr=98.38%, sys=0.51%, ctx=187, majf=0, minf=5526 00:28:32.758 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:28:32.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:32.758 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:32.758 issued rwts: total=65476,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:32.758 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:32.758 second_half: (groupid=0, jobs=1): err= 0: pid=77980: Wed Apr 17 14:47:41 2024 00:28:32.758 read: IOPS=2660, BW=10.4MiB/s (10.9MB/s)(256MiB/24613msec) 00:28:32.758 slat (nsec): min=3956, max=59306, avg=8005.92, stdev=3219.04 00:28:32.758 clat (msec): min=8, max=267, avg=41.16, stdev=24.87 00:28:32.758 lat (msec): min=8, max=267, avg=41.17, stdev=24.87 00:28:32.758 clat percentiles (msec): 00:28:32.758 | 1.00th=[ 30], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:28:32.758 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:28:32.758 | 70.00th=[ 39], 80.00th=[ 40], 90.00th=[ 48], 95.00th=[ 79], 00:28:32.758 | 99.00th=[ 176], 99.50th=[ 192], 99.90th=[ 211], 99.95th=[ 224], 00:28:32.758 | 99.99th=[ 255] 00:28:32.758 write: IOPS=2677, BW=10.5MiB/s (11.0MB/s)(256MiB/24476msec); 0 zone resets 00:28:32.758 slat (usec): min=4, max=1154, avg= 8.92, stdev= 9.67 00:28:32.758 clat (usec): min=434, max=39252, avg=6908.23, stdev=4166.93 00:28:32.758 lat (usec): min=448, max=39259, avg=6917.16, stdev=4167.57 00:28:32.758 clat percentiles (usec): 00:28:32.758 | 1.00th=[ 1205], 5.00th=[ 1958], 10.00th=[ 2704], 20.00th=[ 3654], 00:28:32.758 | 30.00th=[ 4621], 40.00th=[ 5407], 50.00th=[ 6259], 60.00th=[ 6849], 00:28:32.758 | 70.00th=[ 7767], 80.00th=[ 9241], 90.00th=[12518], 95.00th=[14091], 00:28:32.758 | 99.00th=[20841], 99.50th=[28967], 99.90th=[35914], 99.95th=[37487], 00:28:32.758 | 99.99th=[38536] 00:28:32.758 bw ( KiB/s): min= 2064, max=41968, per=100.00%, avg=21845.33, stdev=13631.93, samples=24 00:28:32.758 iops : min= 516, max=10492, avg=5461.33, stdev=3407.98, samples=24 00:28:32.758 lat (usec) : 500=0.01%, 750=0.07%, 1000=0.17% 00:28:32.758 lat (msec) : 2=2.35%, 4=9.61%, 10=28.84%, 20=8.50%, 50=46.54% 00:28:32.758 lat (msec) : 100=1.90%, 250=2.02%, 500=0.01% 00:28:32.758 cpu : usr=99.22%, sys=0.18%, ctx=54, majf=0, minf=5593 00:28:32.758 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:28:32.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:32.758 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:32.758 issued rwts: total=65489,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:32.758 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:32.758 00:28:32.758 Run status group 0 (all jobs): 00:28:32.758 READ: bw=20.6MiB/s (21.6MB/s), 10.3MiB/s-10.4MiB/s (10.8MB/s-10.9MB/s), io=512MiB (536MB), run=24613-24859msec 00:28:32.758 WRITE: bw=20.6MiB/s (21.6MB/s), 10.3MiB/s-10.5MiB/s (10.8MB/s-11.0MB/s), io=512MiB (537MB), run=24476-24813msec 00:28:36.044 ----------------------------------------------------- 00:28:36.044 Suppressions used: 00:28:36.044 count bytes template 00:28:36.044 2 10 /usr/src/fio/parse.c 
00:28:36.044 4 384 /usr/src/fio/iolog.c 00:28:36.044 1 8 libtcmalloc_minimal.so 00:28:36.044 1 904 libcrypto.so 00:28:36.044 ----------------------------------------------------- 00:28:36.044 00:28:36.044 14:47:43 -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:28:36.044 14:47:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:36.044 14:47:43 -- common/autotest_common.sh@10 -- # set +x 00:28:36.044 14:47:43 -- ftl/fio.sh@78 -- # for test in ${tests} 00:28:36.044 14:47:43 -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:28:36.044 14:47:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:36.044 14:47:43 -- common/autotest_common.sh@10 -- # set +x 00:28:36.044 14:47:44 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:28:36.044 14:47:44 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:28:36.044 14:47:44 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:28:36.044 14:47:44 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:36.044 14:47:44 -- common/autotest_common.sh@1325 -- # local sanitizers 00:28:36.044 14:47:44 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:36.044 14:47:44 -- common/autotest_common.sh@1327 -- # shift 00:28:36.044 14:47:44 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:28:36.044 14:47:44 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:28:36.044 14:47:44 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:36.044 14:47:44 -- common/autotest_common.sh@1331 -- # grep libasan 00:28:36.044 14:47:44 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:28:36.044 14:47:44 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:36.044 14:47:44 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:36.044 14:47:44 -- common/autotest_common.sh@1333 -- # break 00:28:36.044 14:47:44 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:36.044 14:47:44 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:28:36.044 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:28:36.044 fio-3.35 00:28:36.044 Starting 1 thread 00:28:52.046 00:28:52.046 test: (groupid=0, jobs=1): err= 0: pid=78310: Wed Apr 17 14:47:59 2024 00:28:52.046 read: IOPS=7404, BW=28.9MiB/s (30.3MB/s)(255MiB/8806msec) 00:28:52.046 slat (usec): min=4, max=1495, avg= 6.10, stdev= 6.07 00:28:52.046 clat (usec): min=725, max=32529, avg=17277.72, stdev=1583.49 00:28:52.046 lat (usec): min=730, max=32535, avg=17283.81, stdev=1583.58 00:28:52.046 clat percentiles (usec): 00:28:52.046 | 1.00th=[15926], 5.00th=[16188], 10.00th=[16188], 20.00th=[16450], 00:28:52.046 | 30.00th=[16581], 40.00th=[16581], 50.00th=[16712], 60.00th=[16909], 00:28:52.046 | 70.00th=[17171], 80.00th=[17433], 90.00th=[20055], 95.00th=[20579], 00:28:52.046 | 99.00th=[22414], 99.50th=[24773], 99.90th=[26608], 99.95th=[28443], 00:28:52.046 | 99.99th=[31851] 00:28:52.046 write: IOPS=12.6k, BW=49.4MiB/s (51.8MB/s)(256MiB/5185msec); 0 zone resets 00:28:52.046 slat (usec): min=5, 
max=138, avg= 8.84, stdev= 4.40 00:28:52.046 clat (usec): min=616, max=58325, avg=10071.29, stdev=12226.22 00:28:52.046 lat (usec): min=626, max=58335, avg=10080.13, stdev=12226.21 00:28:52.046 clat percentiles (usec): 00:28:52.046 | 1.00th=[ 889], 5.00th=[ 1037], 10.00th=[ 1139], 20.00th=[ 1303], 00:28:52.046 | 30.00th=[ 1500], 40.00th=[ 2057], 50.00th=[ 6783], 60.00th=[ 8029], 00:28:52.046 | 70.00th=[ 9503], 80.00th=[11469], 90.00th=[34866], 95.00th=[37487], 00:28:52.046 | 99.00th=[43779], 99.50th=[45351], 99.90th=[48497], 99.95th=[49546], 00:28:52.046 | 99.99th=[56886] 00:28:52.046 bw ( KiB/s): min=16608, max=67400, per=94.27%, avg=47662.55, stdev=13349.07, samples=11 00:28:52.046 iops : min= 4152, max=16850, avg=11915.64, stdev=3337.27, samples=11 00:28:52.046 lat (usec) : 750=0.04%, 1000=1.81% 00:28:52.046 lat (msec) : 2=18.07%, 4=1.13%, 10=15.93%, 20=50.09%, 50=12.91% 00:28:52.046 lat (msec) : 100=0.02% 00:28:52.046 cpu : usr=98.98%, sys=0.29%, ctx=31, majf=0, minf=5567 00:28:52.046 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:28:52.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:52.046 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:52.046 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:52.046 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:52.046 00:28:52.046 Run status group 0 (all jobs): 00:28:52.046 READ: bw=28.9MiB/s (30.3MB/s), 28.9MiB/s-28.9MiB/s (30.3MB/s-30.3MB/s), io=255MiB (267MB), run=8806-8806msec 00:28:52.046 WRITE: bw=49.4MiB/s (51.8MB/s), 49.4MiB/s-49.4MiB/s (51.8MB/s-51.8MB/s), io=256MiB (268MB), run=5185-5185msec 00:28:53.446 ----------------------------------------------------- 00:28:53.447 Suppressions used: 00:28:53.447 count bytes template 00:28:53.447 1 5 /usr/src/fio/parse.c 00:28:53.447 2 192 /usr/src/fio/iolog.c 00:28:53.447 1 8 libtcmalloc_minimal.so 00:28:53.447 1 904 libcrypto.so 00:28:53.447 ----------------------------------------------------- 00:28:53.447 00:28:53.447 14:48:01 -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:28:53.447 14:48:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:53.447 14:48:01 -- common/autotest_common.sh@10 -- # set +x 00:28:53.447 14:48:01 -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:53.447 14:48:01 -- ftl/fio.sh@85 -- # remove_shm 00:28:53.447 14:48:01 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:53.447 Remove shared memory files 00:28:53.447 14:48:01 -- ftl/common.sh@205 -- # rm -f rm -f 00:28:53.447 14:48:01 -- ftl/common.sh@206 -- # rm -f rm -f 00:28:53.447 14:48:01 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid61953 /dev/shm/spdk_tgt_trace.pid76464 00:28:53.447 14:48:01 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:53.447 14:48:02 -- ftl/common.sh@209 -- # rm -f rm -f 00:28:53.447 ************************************ 00:28:53.447 END TEST ftl_fio_basic 00:28:53.447 ************************************ 00:28:53.447 00:28:53.447 real 1m18.083s 00:28:53.447 user 2m48.043s 00:28:53.447 sys 0m4.198s 00:28:53.447 14:48:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:53.447 14:48:02 -- common/autotest_common.sh@10 -- # set +x 00:28:53.705 14:48:02 -- ftl/ftl.sh@75 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:28:53.705 14:48:02 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:28:53.705 
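The fio_bdev traces above show why LD_PRELOAD is set before each fio run: the stock fio binary at /usr/src/fio/fio is not built with ASan, so the harness resolves the sanitizer runtime that the SPDK ioengine plugin links against and preloads it together with the plugin. A minimal standalone sketch of that logic, assuming the paths printed in the traces (the real implementation lives in common/autotest_common.sh):

    #!/usr/bin/env bash
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    job=/home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
    # Resolve the ASan runtime the plugin was linked against, exactly as the
    # trace does: ldd | grep libasan | awk '{print $3}' -> /usr/lib64/libasan.so.8
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    if [[ -n "$asan_lib" ]]; then
        # ASan has to initialize before any instrumented code runs, so the
        # runtime is preloaded ahead of the plugin itself.
        export LD_PRELOAD="$asan_lib $plugin"
    fi
    /usr/src/fio/fio "$job"

Without the preload, loading the instrumented spdk_bdev ioengine into an uninstrumented fio would fail when the plugin is opened.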
14:48:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:53.705 14:48:02 -- common/autotest_common.sh@10 -- # set +x 00:28:53.705 ************************************ 00:28:53.705 START TEST ftl_bdevperf 00:28:53.705 ************************************ 00:28:53.705 14:48:02 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:28:53.705 * Looking for test storage... 00:28:53.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:53.705 14:48:02 -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:53.705 14:48:02 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:28:53.705 14:48:02 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:53.705 14:48:02 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:53.705 14:48:02 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:28:53.706 14:48:02 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:53.706 14:48:02 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:53.706 14:48:02 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:53.706 14:48:02 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:53.706 14:48:02 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:53.706 14:48:02 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:53.706 14:48:02 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:53.706 14:48:02 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:53.706 14:48:02 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:53.706 14:48:02 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:53.706 14:48:02 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:53.706 14:48:02 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:53.706 14:48:02 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:53.706 14:48:02 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:53.706 14:48:02 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:53.706 14:48:02 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:53.706 14:48:02 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:53.706 14:48:02 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:53.706 14:48:02 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:53.706 14:48:02 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:53.706 14:48:02 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:53.706 14:48:02 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:53.706 14:48:02 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:53.706 14:48:02 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:53.706 14:48:02 -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:28:53.706 14:48:02 -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:28:53.706 14:48:02 -- ftl/bdevperf.sh@13 -- # use_append= 00:28:53.706 14:48:02 -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:53.706 14:48:02 -- 
ftl/bdevperf.sh@15 -- # timeout=240 00:28:53.706 14:48:02 -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:28:53.706 14:48:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:53.706 14:48:02 -- common/autotest_common.sh@10 -- # set +x 00:28:53.706 14:48:02 -- ftl/bdevperf.sh@19 -- # bdevperf_pid=78554 00:28:53.706 14:48:02 -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:28:53.706 14:48:02 -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:28:53.706 14:48:02 -- ftl/bdevperf.sh@22 -- # waitforlisten 78554 00:28:53.706 14:48:02 -- common/autotest_common.sh@817 -- # '[' -z 78554 ']' 00:28:53.706 14:48:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:53.706 14:48:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:53.706 14:48:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:53.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:53.706 14:48:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:53.706 14:48:02 -- common/autotest_common.sh@10 -- # set +x 00:28:53.964 [2024-04-17 14:48:02.370858] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:28:53.964 [2024-04-17 14:48:02.371212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78554 ] 00:28:53.964 [2024-04-17 14:48:02.540999] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.239 [2024-04-17 14:48:02.837764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:54.805 14:48:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:54.805 14:48:03 -- common/autotest_common.sh@850 -- # return 0 00:28:54.805 14:48:03 -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:54.805 14:48:03 -- ftl/common.sh@54 -- # local name=nvme0 00:28:54.805 14:48:03 -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:54.805 14:48:03 -- ftl/common.sh@56 -- # local size=103424 00:28:54.805 14:48:03 -- ftl/common.sh@59 -- # local base_bdev 00:28:54.805 14:48:03 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:55.065 14:48:03 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:55.065 14:48:03 -- ftl/common.sh@62 -- # local base_size 00:28:55.065 14:48:03 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:55.065 14:48:03 -- common/autotest_common.sh@1364 -- # local bdev_name=nvme0n1 00:28:55.065 14:48:03 -- common/autotest_common.sh@1365 -- # local bdev_info 00:28:55.065 14:48:03 -- common/autotest_common.sh@1366 -- # local bs 00:28:55.065 14:48:03 -- common/autotest_common.sh@1367 -- # local nb 00:28:55.065 14:48:03 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:55.325 14:48:03 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:28:55.325 { 00:28:55.325 "name": "nvme0n1", 00:28:55.325 "aliases": [ 00:28:55.325 "6f46280a-4763-4747-88a1-8815d0c91db7" 00:28:55.325 ], 00:28:55.325 "product_name": "NVMe disk", 00:28:55.325 "block_size": 4096, 00:28:55.325 "num_blocks": 1310720, 00:28:55.325 
"uuid": "6f46280a-4763-4747-88a1-8815d0c91db7", 00:28:55.325 "assigned_rate_limits": { 00:28:55.325 "rw_ios_per_sec": 0, 00:28:55.325 "rw_mbytes_per_sec": 0, 00:28:55.325 "r_mbytes_per_sec": 0, 00:28:55.325 "w_mbytes_per_sec": 0 00:28:55.325 }, 00:28:55.325 "claimed": true, 00:28:55.325 "claim_type": "read_many_write_one", 00:28:55.325 "zoned": false, 00:28:55.325 "supported_io_types": { 00:28:55.325 "read": true, 00:28:55.325 "write": true, 00:28:55.325 "unmap": true, 00:28:55.325 "write_zeroes": true, 00:28:55.325 "flush": true, 00:28:55.325 "reset": true, 00:28:55.325 "compare": true, 00:28:55.325 "compare_and_write": false, 00:28:55.325 "abort": true, 00:28:55.325 "nvme_admin": true, 00:28:55.325 "nvme_io": true 00:28:55.325 }, 00:28:55.325 "driver_specific": { 00:28:55.325 "nvme": [ 00:28:55.325 { 00:28:55.325 "pci_address": "0000:00:11.0", 00:28:55.325 "trid": { 00:28:55.325 "trtype": "PCIe", 00:28:55.325 "traddr": "0000:00:11.0" 00:28:55.325 }, 00:28:55.325 "ctrlr_data": { 00:28:55.325 "cntlid": 0, 00:28:55.325 "vendor_id": "0x1b36", 00:28:55.325 "model_number": "QEMU NVMe Ctrl", 00:28:55.325 "serial_number": "12341", 00:28:55.325 "firmware_revision": "8.0.0", 00:28:55.325 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:55.325 "oacs": { 00:28:55.325 "security": 0, 00:28:55.325 "format": 1, 00:28:55.325 "firmware": 0, 00:28:55.325 "ns_manage": 1 00:28:55.325 }, 00:28:55.325 "multi_ctrlr": false, 00:28:55.325 "ana_reporting": false 00:28:55.325 }, 00:28:55.325 "vs": { 00:28:55.325 "nvme_version": "1.4" 00:28:55.325 }, 00:28:55.325 "ns_data": { 00:28:55.325 "id": 1, 00:28:55.325 "can_share": false 00:28:55.325 } 00:28:55.325 } 00:28:55.325 ], 00:28:55.325 "mp_policy": "active_passive" 00:28:55.325 } 00:28:55.325 } 00:28:55.325 ]' 00:28:55.325 14:48:03 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:28:55.585 14:48:03 -- common/autotest_common.sh@1369 -- # bs=4096 00:28:55.585 14:48:03 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:28:55.585 14:48:03 -- common/autotest_common.sh@1370 -- # nb=1310720 00:28:55.585 14:48:03 -- common/autotest_common.sh@1373 -- # bdev_size=5120 00:28:55.585 14:48:03 -- common/autotest_common.sh@1374 -- # echo 5120 00:28:55.585 14:48:03 -- ftl/common.sh@63 -- # base_size=5120 00:28:55.585 14:48:03 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:55.585 14:48:03 -- ftl/common.sh@67 -- # clear_lvols 00:28:55.585 14:48:03 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:55.585 14:48:04 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:55.854 14:48:04 -- ftl/common.sh@28 -- # stores=dae0c005-611c-4161-ba1c-a71128898676 00:28:55.854 14:48:04 -- ftl/common.sh@29 -- # for lvs in $stores 00:28:55.854 14:48:04 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dae0c005-611c-4161-ba1c-a71128898676 00:28:56.113 14:48:04 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:56.371 14:48:04 -- ftl/common.sh@68 -- # lvs=37b75a3d-360a-44c3-9706-47880e510e66 00:28:56.371 14:48:04 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 37b75a3d-360a-44c3-9706-47880e510e66 00:28:56.629 14:48:05 -- ftl/bdevperf.sh@23 -- # split_bdev=aa18d8f4-5b12-4377-8791-2ae4fb0d305f 00:28:56.629 14:48:05 -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:10.0 aa18d8f4-5b12-4377-8791-2ae4fb0d305f 00:28:56.629 14:48:05 -- ftl/common.sh@35 -- # 
local name=nvc0 00:28:56.629 14:48:05 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:56.629 14:48:05 -- ftl/common.sh@37 -- # local base_bdev=aa18d8f4-5b12-4377-8791-2ae4fb0d305f 00:28:56.629 14:48:05 -- ftl/common.sh@38 -- # local cache_size= 00:28:56.629 14:48:05 -- ftl/common.sh@41 -- # get_bdev_size aa18d8f4-5b12-4377-8791-2ae4fb0d305f 00:28:56.629 14:48:05 -- common/autotest_common.sh@1364 -- # local bdev_name=aa18d8f4-5b12-4377-8791-2ae4fb0d305f 00:28:56.629 14:48:05 -- common/autotest_common.sh@1365 -- # local bdev_info 00:28:56.629 14:48:05 -- common/autotest_common.sh@1366 -- # local bs 00:28:56.629 14:48:05 -- common/autotest_common.sh@1367 -- # local nb 00:28:56.629 14:48:05 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aa18d8f4-5b12-4377-8791-2ae4fb0d305f 00:28:56.915 14:48:05 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:28:56.915 { 00:28:56.915 "name": "aa18d8f4-5b12-4377-8791-2ae4fb0d305f", 00:28:56.915 "aliases": [ 00:28:56.915 "lvs/nvme0n1p0" 00:28:56.915 ], 00:28:56.915 "product_name": "Logical Volume", 00:28:56.915 "block_size": 4096, 00:28:56.915 "num_blocks": 26476544, 00:28:56.915 "uuid": "aa18d8f4-5b12-4377-8791-2ae4fb0d305f", 00:28:56.915 "assigned_rate_limits": { 00:28:56.915 "rw_ios_per_sec": 0, 00:28:56.915 "rw_mbytes_per_sec": 0, 00:28:56.915 "r_mbytes_per_sec": 0, 00:28:56.915 "w_mbytes_per_sec": 0 00:28:56.915 }, 00:28:56.915 "claimed": false, 00:28:56.915 "zoned": false, 00:28:56.915 "supported_io_types": { 00:28:56.915 "read": true, 00:28:56.915 "write": true, 00:28:56.915 "unmap": true, 00:28:56.915 "write_zeroes": true, 00:28:56.915 "flush": false, 00:28:56.915 "reset": true, 00:28:56.915 "compare": false, 00:28:56.915 "compare_and_write": false, 00:28:56.915 "abort": false, 00:28:56.915 "nvme_admin": false, 00:28:56.915 "nvme_io": false 00:28:56.915 }, 00:28:56.915 "driver_specific": { 00:28:56.915 "lvol": { 00:28:56.915 "lvol_store_uuid": "37b75a3d-360a-44c3-9706-47880e510e66", 00:28:56.915 "base_bdev": "nvme0n1", 00:28:56.915 "thin_provision": true, 00:28:56.915 "snapshot": false, 00:28:56.915 "clone": false, 00:28:56.915 "esnap_clone": false 00:28:56.915 } 00:28:56.915 } 00:28:56.915 } 00:28:56.915 ]' 00:28:56.915 14:48:05 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:28:56.915 14:48:05 -- common/autotest_common.sh@1369 -- # bs=4096 00:28:56.915 14:48:05 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:28:56.915 14:48:05 -- common/autotest_common.sh@1370 -- # nb=26476544 00:28:56.915 14:48:05 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:28:56.915 14:48:05 -- common/autotest_common.sh@1374 -- # echo 103424 00:28:56.915 14:48:05 -- ftl/common.sh@41 -- # local base_size=5171 00:28:56.915 14:48:05 -- ftl/common.sh@44 -- # local nvc_bdev 00:28:56.915 14:48:05 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:57.186 14:48:05 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:57.186 14:48:05 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:57.186 14:48:05 -- ftl/common.sh@48 -- # get_bdev_size aa18d8f4-5b12-4377-8791-2ae4fb0d305f 00:28:57.186 14:48:05 -- common/autotest_common.sh@1364 -- # local bdev_name=aa18d8f4-5b12-4377-8791-2ae4fb0d305f 00:28:57.186 14:48:05 -- common/autotest_common.sh@1365 -- # local bdev_info 00:28:57.186 14:48:05 -- common/autotest_common.sh@1366 -- # local bs 00:28:57.186 14:48:05 -- common/autotest_common.sh@1367 -- # local nb 
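The get_bdev_size helper, whose xtrace appears just above for nvme0n1 and is starting again here for the lvol, derives a bdev's size in MiB from the bdev_get_bdevs JSON as block_size * num_blocks / 1024 / 1024: with the values printed in the log, that is 4096 * 1310720 / 2^20 = 5120 MiB for nvme0n1 and 4096 * 26476544 / 2^20 = 103424 MiB for the lvol. A simplified sketch, assuming the rpc.py path and jq filters exactly as traced:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    get_bdev_size() {
        # Prints the size of bdev $1 in MiB, mirroring the traced helper.
        local bdev_info bs nb
        bdev_info=$("$rpc_py" bdev_get_bdevs -b "$1")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")
        echo $((bs * nb / 1024 / 1024))
    }

The 103424 MiB result is what the base-bdev size check compares against, and the 5171 MiB computed later becomes the cache_size carved out of nvc0n1 with bdev_split_create.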
00:28:57.186 14:48:05 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aa18d8f4-5b12-4377-8791-2ae4fb0d305f 00:28:57.445 14:48:05 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:28:57.445 { 00:28:57.445 "name": "aa18d8f4-5b12-4377-8791-2ae4fb0d305f", 00:28:57.445 "aliases": [ 00:28:57.445 "lvs/nvme0n1p0" 00:28:57.445 ], 00:28:57.445 "product_name": "Logical Volume", 00:28:57.445 "block_size": 4096, 00:28:57.445 "num_blocks": 26476544, 00:28:57.445 "uuid": "aa18d8f4-5b12-4377-8791-2ae4fb0d305f", 00:28:57.445 "assigned_rate_limits": { 00:28:57.445 "rw_ios_per_sec": 0, 00:28:57.445 "rw_mbytes_per_sec": 0, 00:28:57.446 "r_mbytes_per_sec": 0, 00:28:57.446 "w_mbytes_per_sec": 0 00:28:57.446 }, 00:28:57.446 "claimed": false, 00:28:57.446 "zoned": false, 00:28:57.446 "supported_io_types": { 00:28:57.446 "read": true, 00:28:57.446 "write": true, 00:28:57.446 "unmap": true, 00:28:57.446 "write_zeroes": true, 00:28:57.446 "flush": false, 00:28:57.446 "reset": true, 00:28:57.446 "compare": false, 00:28:57.446 "compare_and_write": false, 00:28:57.446 "abort": false, 00:28:57.446 "nvme_admin": false, 00:28:57.446 "nvme_io": false 00:28:57.446 }, 00:28:57.446 "driver_specific": { 00:28:57.446 "lvol": { 00:28:57.446 "lvol_store_uuid": "37b75a3d-360a-44c3-9706-47880e510e66", 00:28:57.446 "base_bdev": "nvme0n1", 00:28:57.446 "thin_provision": true, 00:28:57.446 "snapshot": false, 00:28:57.446 "clone": false, 00:28:57.446 "esnap_clone": false 00:28:57.446 } 00:28:57.446 } 00:28:57.446 } 00:28:57.446 ]' 00:28:57.446 14:48:05 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:28:57.446 14:48:05 -- common/autotest_common.sh@1369 -- # bs=4096 00:28:57.446 14:48:05 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:28:57.446 14:48:06 -- common/autotest_common.sh@1370 -- # nb=26476544 00:28:57.446 14:48:06 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:28:57.446 14:48:06 -- common/autotest_common.sh@1374 -- # echo 103424 00:28:57.446 14:48:06 -- ftl/common.sh@48 -- # cache_size=5171 00:28:57.446 14:48:06 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:57.704 14:48:06 -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:28:57.704 14:48:06 -- ftl/bdevperf.sh@26 -- # get_bdev_size aa18d8f4-5b12-4377-8791-2ae4fb0d305f 00:28:57.705 14:48:06 -- common/autotest_common.sh@1364 -- # local bdev_name=aa18d8f4-5b12-4377-8791-2ae4fb0d305f 00:28:57.705 14:48:06 -- common/autotest_common.sh@1365 -- # local bdev_info 00:28:57.705 14:48:06 -- common/autotest_common.sh@1366 -- # local bs 00:28:57.705 14:48:06 -- common/autotest_common.sh@1367 -- # local nb 00:28:57.705 14:48:06 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aa18d8f4-5b12-4377-8791-2ae4fb0d305f 00:28:57.983 14:48:06 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:28:57.983 { 00:28:57.983 "name": "aa18d8f4-5b12-4377-8791-2ae4fb0d305f", 00:28:57.983 "aliases": [ 00:28:57.983 "lvs/nvme0n1p0" 00:28:57.983 ], 00:28:57.983 "product_name": "Logical Volume", 00:28:57.983 "block_size": 4096, 00:28:57.983 "num_blocks": 26476544, 00:28:57.983 "uuid": "aa18d8f4-5b12-4377-8791-2ae4fb0d305f", 00:28:57.983 "assigned_rate_limits": { 00:28:57.983 "rw_ios_per_sec": 0, 00:28:57.983 "rw_mbytes_per_sec": 0, 00:28:57.983 "r_mbytes_per_sec": 0, 00:28:57.983 "w_mbytes_per_sec": 0 00:28:57.983 }, 00:28:57.983 "claimed": false, 00:28:57.983 "zoned": false, 00:28:57.983 
"supported_io_types": { 00:28:57.983 "read": true, 00:28:57.983 "write": true, 00:28:57.983 "unmap": true, 00:28:57.983 "write_zeroes": true, 00:28:57.983 "flush": false, 00:28:57.983 "reset": true, 00:28:57.983 "compare": false, 00:28:57.983 "compare_and_write": false, 00:28:57.983 "abort": false, 00:28:57.983 "nvme_admin": false, 00:28:57.983 "nvme_io": false 00:28:57.983 }, 00:28:57.983 "driver_specific": { 00:28:57.983 "lvol": { 00:28:57.983 "lvol_store_uuid": "37b75a3d-360a-44c3-9706-47880e510e66", 00:28:57.983 "base_bdev": "nvme0n1", 00:28:57.983 "thin_provision": true, 00:28:57.983 "snapshot": false, 00:28:57.983 "clone": false, 00:28:57.983 "esnap_clone": false 00:28:57.983 } 00:28:57.983 } 00:28:57.983 } 00:28:57.983 ]' 00:28:57.983 14:48:06 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:28:57.983 14:48:06 -- common/autotest_common.sh@1369 -- # bs=4096 00:28:57.983 14:48:06 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:28:57.983 14:48:06 -- common/autotest_common.sh@1370 -- # nb=26476544 00:28:57.983 14:48:06 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:28:57.983 14:48:06 -- common/autotest_common.sh@1374 -- # echo 103424 00:28:57.983 14:48:06 -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:28:57.983 14:48:06 -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d aa18d8f4-5b12-4377-8791-2ae4fb0d305f -c nvc0n1p0 --l2p_dram_limit 20 00:28:58.242 [2024-04-17 14:48:06.729794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.242 [2024-04-17 14:48:06.730090] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:58.242 [2024-04-17 14:48:06.730222] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:58.242 [2024-04-17 14:48:06.730274] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.242 [2024-04-17 14:48:06.730524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.242 [2024-04-17 14:48:06.730586] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:58.242 [2024-04-17 14:48:06.730704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:28:58.242 [2024-04-17 14:48:06.730759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.242 [2024-04-17 14:48:06.730834] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:58.242 [2024-04-17 14:48:06.732506] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:58.242 [2024-04-17 14:48:06.732665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.242 [2024-04-17 14:48:06.732751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:58.242 [2024-04-17 14:48:06.732829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.851 ms 00:28:58.242 [2024-04-17 14:48:06.732875] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.242 [2024-04-17 14:48:06.733082] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d8856f79-8af2-4094-8d59-77731ec05054 00:28:58.242 [2024-04-17 14:48:06.734785] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.242 [2024-04-17 14:48:06.734923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:58.242 [2024-04-17 14:48:06.735017] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:28:58.242 [2024-04-17 14:48:06.735107] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.242 [2024-04-17 14:48:06.743133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.242 [2024-04-17 14:48:06.743382] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:58.242 [2024-04-17 14:48:06.743476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.907 ms 00:28:58.242 [2024-04-17 14:48:06.743537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.242 [2024-04-17 14:48:06.743695] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.242 [2024-04-17 14:48:06.743738] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:58.242 [2024-04-17 14:48:06.743777] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:28:58.242 [2024-04-17 14:48:06.743874] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.242 [2024-04-17 14:48:06.743994] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.243 [2024-04-17 14:48:06.744036] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:58.243 [2024-04-17 14:48:06.744128] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:58.243 [2024-04-17 14:48:06.744168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.243 [2024-04-17 14:48:06.744263] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:58.243 [2024-04-17 14:48:06.751144] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.243 [2024-04-17 14:48:06.751357] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:58.243 [2024-04-17 14:48:06.751447] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.891 ms 00:28:58.243 [2024-04-17 14:48:06.751501] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.243 [2024-04-17 14:48:06.751580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.243 [2024-04-17 14:48:06.751652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:58.243 [2024-04-17 14:48:06.751690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:28:58.243 [2024-04-17 14:48:06.751727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.243 [2024-04-17 14:48:06.751801] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:58.243 [2024-04-17 14:48:06.752115] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:28:58.243 [2024-04-17 14:48:06.752250] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:58.243 [2024-04-17 14:48:06.752362] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:28:58.243 [2024-04-17 14:48:06.752463] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:58.243 [2024-04-17 14:48:06.752555] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:58.243 [2024-04-17 14:48:06.752655] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
entries: 20971520 00:28:58.243 [2024-04-17 14:48:06.752736] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:58.243 [2024-04-17 14:48:06.752784] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:28:58.243 [2024-04-17 14:48:06.752870] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:28:58.243 [2024-04-17 14:48:06.752917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.243 [2024-04-17 14:48:06.752963] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:58.243 [2024-04-17 14:48:06.753032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.118 ms 00:28:58.243 [2024-04-17 14:48:06.753113] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.243 [2024-04-17 14:48:06.753224] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.243 [2024-04-17 14:48:06.753274] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:58.243 [2024-04-17 14:48:06.753354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:28:58.243 [2024-04-17 14:48:06.753400] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.243 [2024-04-17 14:48:06.753560] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:58.243 [2024-04-17 14:48:06.753610] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:58.243 [2024-04-17 14:48:06.753710] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:58.243 [2024-04-17 14:48:06.753765] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:58.243 [2024-04-17 14:48:06.753842] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:58.243 [2024-04-17 14:48:06.753887] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:58.243 [2024-04-17 14:48:06.753922] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:58.243 [2024-04-17 14:48:06.754012] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:58.243 [2024-04-17 14:48:06.754070] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:58.243 [2024-04-17 14:48:06.754110] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:58.243 [2024-04-17 14:48:06.754169] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:58.243 [2024-04-17 14:48:06.754248] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:58.243 [2024-04-17 14:48:06.754294] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:58.243 [2024-04-17 14:48:06.754429] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:58.243 [2024-04-17 14:48:06.754473] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:28:58.243 [2024-04-17 14:48:06.754529] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:58.243 [2024-04-17 14:48:06.754578] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:58.243 [2024-04-17 14:48:06.754659] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:28:58.243 [2024-04-17 14:48:06.754699] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:58.243 [2024-04-17 14:48:06.754739] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:28:58.243 [2024-04-17 
14:48:06.754785] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:28:58.243 [2024-04-17 14:48:06.754822] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:28:58.243 [2024-04-17 14:48:06.754856] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:58.243 [2024-04-17 14:48:06.754892] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:58.243 [2024-04-17 14:48:06.754925] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:28:58.243 [2024-04-17 14:48:06.754988] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:58.243 [2024-04-17 14:48:06.755023] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:28:58.243 [2024-04-17 14:48:06.755058] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:28:58.243 [2024-04-17 14:48:06.755092] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:58.243 [2024-04-17 14:48:06.755128] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:58.243 [2024-04-17 14:48:06.755197] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:28:58.243 [2024-04-17 14:48:06.755233] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:58.243 [2024-04-17 14:48:06.755267] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:28:58.243 [2024-04-17 14:48:06.755303] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:28:58.243 [2024-04-17 14:48:06.755337] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:58.243 [2024-04-17 14:48:06.755403] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:58.243 [2024-04-17 14:48:06.755437] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:58.243 [2024-04-17 14:48:06.755473] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:58.243 [2024-04-17 14:48:06.755518] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:28:58.243 [2024-04-17 14:48:06.755558] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:58.243 [2024-04-17 14:48:06.755612] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:58.243 [2024-04-17 14:48:06.755650] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:58.243 [2024-04-17 14:48:06.755685] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:58.243 [2024-04-17 14:48:06.755722] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:58.243 [2024-04-17 14:48:06.755763] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:58.243 [2024-04-17 14:48:06.755822] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:58.243 [2024-04-17 14:48:06.755863] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:58.243 [2024-04-17 14:48:06.755903] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:58.243 [2024-04-17 14:48:06.755941] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:58.243 [2024-04-17 14:48:06.756021] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:58.243 [2024-04-17 14:48:06.756063] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:58.243 [2024-04-17 14:48:06.756137] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:58.243 [2024-04-17 14:48:06.756295] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:58.243 [2024-04-17 14:48:06.756419] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:28:58.244 [2024-04-17 14:48:06.756485] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:28:58.244 [2024-04-17 14:48:06.756607] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:28:58.244 [2024-04-17 14:48:06.756671] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:28:58.244 [2024-04-17 14:48:06.756774] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:28:58.244 [2024-04-17 14:48:06.756872] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:28:58.244 [2024-04-17 14:48:06.756991] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:28:58.244 [2024-04-17 14:48:06.757088] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:28:58.244 [2024-04-17 14:48:06.757191] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:28:58.244 [2024-04-17 14:48:06.757290] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:28:58.244 [2024-04-17 14:48:06.757390] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:28:58.244 [2024-04-17 14:48:06.757561] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:28:58.244 [2024-04-17 14:48:06.757683] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:58.244 [2024-04-17 14:48:06.757799] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:58.244 [2024-04-17 14:48:06.757873] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:58.244 [2024-04-17 14:48:06.757992] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:58.244 [2024-04-17 14:48:06.758061] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:58.244 [2024-04-17 14:48:06.758186] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:58.244 [2024-04-17 14:48:06.758257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.244 [2024-04-17 
14:48:06.758352] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:58.244 [2024-04-17 14:48:06.758395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.734 ms 00:28:58.244 [2024-04-17 14:48:06.758475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.244 [2024-04-17 14:48:06.787191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.244 [2024-04-17 14:48:06.787504] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:58.244 [2024-04-17 14:48:06.787670] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.548 ms 00:28:58.244 [2024-04-17 14:48:06.787711] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.244 [2024-04-17 14:48:06.787843] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.244 [2024-04-17 14:48:06.787879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:58.244 [2024-04-17 14:48:06.787956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:28:58.244 [2024-04-17 14:48:06.787997] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.502 [2024-04-17 14:48:06.857372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.502 [2024-04-17 14:48:06.857640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:58.502 [2024-04-17 14:48:06.857809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.265 ms 00:28:58.502 [2024-04-17 14:48:06.857854] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.502 [2024-04-17 14:48:06.857937] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.502 [2024-04-17 14:48:06.858006] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:58.502 [2024-04-17 14:48:06.858048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:58.502 [2024-04-17 14:48:06.858082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.502 [2024-04-17 14:48:06.858694] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.502 [2024-04-17 14:48:06.858816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:58.502 [2024-04-17 14:48:06.858902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.510 ms 00:28:58.502 [2024-04-17 14:48:06.858941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.502 [2024-04-17 14:48:06.859093] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.502 [2024-04-17 14:48:06.859134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:58.502 [2024-04-17 14:48:06.859217] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:28:58.502 [2024-04-17 14:48:06.859256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.502 [2024-04-17 14:48:06.886382] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.502 [2024-04-17 14:48:06.886645] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:58.502 [2024-04-17 14:48:06.886748] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.007 ms 00:28:58.502 [2024-04-17 14:48:06.886797] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.502 [2024-04-17 14:48:06.905351] ftl_l2p_cache.c: 
458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:28:58.502 [2024-04-17 14:48:06.912236] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.502 [2024-04-17 14:48:06.912513] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:58.502 [2024-04-17 14:48:06.912657] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.282 ms 00:28:58.502 [2024-04-17 14:48:06.912706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.502 [2024-04-17 14:48:06.989759] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.502 [2024-04-17 14:48:06.990076] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:58.502 [2024-04-17 14:48:06.990181] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.973 ms 00:28:58.502 [2024-04-17 14:48:06.990228] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.502 [2024-04-17 14:48:06.990322] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:28:58.502 [2024-04-17 14:48:06.990477] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:29:01.029 [2024-04-17 14:48:09.400533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.030 [2024-04-17 14:48:09.400865] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:29:01.030 [2024-04-17 14:48:09.400997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2410.197 ms 00:29:01.030 [2024-04-17 14:48:09.401059] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.030 [2024-04-17 14:48:09.401411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.030 [2024-04-17 14:48:09.401566] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:01.030 [2024-04-17 14:48:09.401659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:29:01.030 [2024-04-17 14:48:09.401756] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.030 [2024-04-17 14:48:09.447274] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.030 [2024-04-17 14:48:09.447591] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:29:01.030 [2024-04-17 14:48:09.447702] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.405 ms 00:29:01.030 [2024-04-17 14:48:09.447750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.030 [2024-04-17 14:48:09.492923] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.030 [2024-04-17 14:48:09.493219] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:29:01.030 [2024-04-17 14:48:09.493337] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.088 ms 00:29:01.030 [2024-04-17 14:48:09.493378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.030 [2024-04-17 14:48:09.493914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.030 [2024-04-17 14:48:09.494059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:01.030 [2024-04-17 14:48:09.494157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.453 ms 00:29:01.030 [2024-04-17 14:48:09.494204] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:29:01.030 [2024-04-17 14:48:09.603626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.030 [2024-04-17 14:48:09.603909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:29:01.030 [2024-04-17 14:48:09.604000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 109.309 ms 00:29:01.030 [2024-04-17 14:48:09.604045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.289 [2024-04-17 14:48:09.650736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.289 [2024-04-17 14:48:09.650999] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:29:01.289 [2024-04-17 14:48:09.651092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.598 ms 00:29:01.289 [2024-04-17 14:48:09.651137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.289 [2024-04-17 14:48:09.653590] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.289 [2024-04-17 14:48:09.653735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:29:01.289 [2024-04-17 14:48:09.653816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.368 ms 00:29:01.289 [2024-04-17 14:48:09.653864] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.289 [2024-04-17 14:48:09.699373] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.289 [2024-04-17 14:48:09.699692] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:01.289 [2024-04-17 14:48:09.699782] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.397 ms 00:29:01.289 [2024-04-17 14:48:09.699826] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.289 [2024-04-17 14:48:09.699913] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.289 [2024-04-17 14:48:09.699961] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:01.289 [2024-04-17 14:48:09.700047] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:01.289 [2024-04-17 14:48:09.700089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.289 [2024-04-17 14:48:09.700242] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.289 [2024-04-17 14:48:09.700377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:01.289 [2024-04-17 14:48:09.700456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:29:01.289 [2024-04-17 14:48:09.700519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.289 [2024-04-17 14:48:09.701718] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2971.410 ms, result 0 00:29:01.289 { 00:29:01.289 "name": "ftl0", 00:29:01.289 "uuid": "d8856f79-8af2-4094-8d59-77731ec05054" 00:29:01.289 } 00:29:01.289 14:48:09 -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:29:01.289 14:48:09 -- ftl/bdevperf.sh@29 -- # jq -r .name 00:29:01.289 14:48:09 -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:29:01.548 14:48:10 -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:29:01.548 [2024-04-17 14:48:10.142129] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO 
channel created on ftl0 00:29:01.806 I/O size of 69632 is greater than zero copy threshold (65536). 00:29:01.806 Zero copy mechanism will not be used. 00:29:01.806 Running I/O for 4 seconds... 00:29:05.995 00:29:05.995 Latency(us) 00:29:05.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:05.995 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:29:05.995 ftl0 : 4.00 2204.67 146.40 0.00 0.00 474.97 197.00 1693.01 00:29:05.995 =================================================================================================================== 00:29:05.995 Total : 2204.67 146.40 0.00 0.00 474.97 197.00 1693.01 00:29:05.996 [2024-04-17 14:48:14.155040] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:29:05.996 0 00:29:05.996 14:48:14 -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:29:05.996 [2024-04-17 14:48:14.309097] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:29:05.996 Running I/O for 4 seconds... 00:29:10.184 00:29:10.184 Latency(us) 00:29:10.184 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:10.184 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:29:10.184 ftl0 : 4.02 8904.87 34.78 0.00 0.00 14338.56 265.26 35202.19 00:29:10.184 =================================================================================================================== 00:29:10.184 Total : 8904.87 34.78 0.00 0.00 14338.56 0.00 35202.19 00:29:10.184 [2024-04-17 14:48:18.341189] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:29:10.184 0 00:29:10.184 14:48:18 -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:29:10.185 [2024-04-17 14:48:18.520882] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:29:10.185 Running I/O for 4 seconds... 
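All three timed passes in this test are issued through the same RPC driver against the bdevperf application started earlier with -z -T ftl0; only queue depth, workload, and I/O size vary between them. The 69632-byte size of the first pass is 68 KiB, which exceeds the 65536-byte (64 KiB) zero-copy threshold, hence the "Zero copy mechanism will not be used" notice above. The invocations, as printed in the bdevperf.sh traces:

    bdevperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
    "$bdevperf_py" perform_tests -q 1   -w randwrite -t 4 -o 69632  # low QD, 68 KiB I/O
    "$bdevperf_py" perform_tests -q 128 -w randwrite -t 4 -o 4096   # QD128, 4 KiB writes
    "$bdevperf_py" perform_tests -q 128 -w verify    -t 4 -o 4096   # QD128, write plus readback verify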
00:29:14.373 00:29:14.373 Latency(us) 00:29:14.373 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.373 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:14.373 Verification LBA range: start 0x0 length 0x1400000 00:29:14.373 ftl0 : 4.01 7539.87 29.45 0.00 0.00 16921.22 296.47 25090.93 00:29:14.373 =================================================================================================================== 00:29:14.373 Total : 7539.87 29.45 0.00 0.00 16921.22 0.00 25090.93 00:29:14.373 [2024-04-17 14:48:22.556341] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:29:14.373 0 00:29:14.373 14:48:22 -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:29:14.373 [2024-04-17 14:48:22.758877] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.373 [2024-04-17 14:48:22.759193] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:14.373 [2024-04-17 14:48:22.759324] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:14.373 [2024-04-17 14:48:22.759374] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.373 [2024-04-17 14:48:22.759445] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:14.373 [2024-04-17 14:48:22.763928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.373 [2024-04-17 14:48:22.764126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:14.373 [2024-04-17 14:48:22.764259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.306 ms 00:29:14.373 [2024-04-17 14:48:22.764303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.373 [2024-04-17 14:48:22.765783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.373 [2024-04-17 14:48:22.765934] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:14.373 [2024-04-17 14:48:22.766038] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.412 ms 00:29:14.373 [2024-04-17 14:48:22.766080] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.373 [2024-04-17 14:48:22.964534] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.373 [2024-04-17 14:48:22.964800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:14.373 [2024-04-17 14:48:22.964924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 198.372 ms 00:29:14.373 [2024-04-17 14:48:22.964981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.373 [2024-04-17 14:48:22.971316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.373 [2024-04-17 14:48:22.971570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:29:14.373 [2024-04-17 14:48:22.971671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.249 ms 00:29:14.373 [2024-04-17 14:48:22.971720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.632 [2024-04-17 14:48:23.019969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.632 [2024-04-17 14:48:23.020227] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:14.632 [2024-04-17 14:48:23.020337] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
00:29:14.632 [2024-04-17 14:48:23.020385] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:14.633 [2024-04-17 14:48:23.047645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:14.633 [2024-04-17 14:48:23.047894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:29:14.633 [2024-04-17 14:48:23.048061] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.059 ms
00:29:14.633 [2024-04-17 14:48:23.048105] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:14.633 [2024-04-17 14:48:23.048420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:14.633 [2024-04-17 14:48:23.048584] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:29:14.633 [2024-04-17 14:48:23.048678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms
00:29:14.633 [2024-04-17 14:48:23.048731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:14.633 [2024-04-17 14:48:23.096039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:14.633 [2024-04-17 14:48:23.096311] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata
00:29:14.633 [2024-04-17 14:48:23.096464] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.232 ms
00:29:14.633 [2024-04-17 14:48:23.096539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:14.633 [2024-04-17 14:48:23.143577] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:14.633 [2024-04-17 14:48:23.143855] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata
00:29:14.633 [2024-04-17 14:48:23.143989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.924 ms
00:29:14.633 [2024-04-17 14:48:23.144041] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:14.633 [2024-04-17 14:48:23.191401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:14.633 [2024-04-17 14:48:23.191686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:29:14.633 [2024-04-17 14:48:23.191783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.246 ms
00:29:14.633 [2024-04-17 14:48:23.191822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:14.893 [2024-04-17 14:48:23.240135] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:14.893 [2024-04-17 14:48:23.240421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:29:14.893 [2024-04-17 14:48:23.240539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.134 ms
00:29:14.893 [2024-04-17 14:48:23.240582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:14.893 [2024-04-17 14:48:23.240751] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:29:14.893 [2024-04-17 14:48:23.240805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 1-100: 0 / 261120 wr_cnt: 0 state: free (identical for every one of the 100 bands)
00:29:14.894 [2024-04-17 14:48:23.249170] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:29:14.894 [2024-04-17 14:48:23.249218] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d8856f79-8af2-4094-8d59-77731ec05054
00:29:14.894 [2024-04-17 14:48:23.249279] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:29:14.894 [2024-04-17 14:48:23.249316] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:14.894
[2024-04-17 14:48:23.249350] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:14.894 [2024-04-17 14:48:23.249390] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:14.894 [2024-04-17 14:48:23.249429] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:14.894 [2024-04-17 14:48:23.249479] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:14.894 [2024-04-17 14:48:23.249527] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:14.894 [2024-04-17 14:48:23.249564] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:14.894 [2024-04-17 14:48:23.249600] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:14.894 [2024-04-17 14:48:23.249693] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.895 [2024-04-17 14:48:23.249735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:14.895 [2024-04-17 14:48:23.249809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.942 ms 00:29:14.895 [2024-04-17 14:48:23.249881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.895 [2024-04-17 14:48:23.273735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.895 [2024-04-17 14:48:23.273981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:14.895 [2024-04-17 14:48:23.274079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.718 ms 00:29:14.895 [2024-04-17 14:48:23.274124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.895 [2024-04-17 14:48:23.274537] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.895 [2024-04-17 14:48:23.274588] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:14.895 [2024-04-17 14:48:23.274682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:29:14.895 [2024-04-17 14:48:23.274724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.895 [2024-04-17 14:48:23.341897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:14.895 [2024-04-17 14:48:23.342141] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:14.895 [2024-04-17 14:48:23.342250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:14.895 [2024-04-17 14:48:23.342292] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.895 [2024-04-17 14:48:23.342413] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:14.895 [2024-04-17 14:48:23.342532] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:14.895 [2024-04-17 14:48:23.342586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:14.895 [2024-04-17 14:48:23.342621] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.895 [2024-04-17 14:48:23.342771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:14.895 [2024-04-17 14:48:23.342816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:14.895 [2024-04-17 14:48:23.342922] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:14.895 [2024-04-17 14:48:23.342961] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.895 [2024-04-17 14:48:23.343010] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:29:14.895 [2024-04-17 14:48:23.343047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:14.895 [2024-04-17 14:48:23.343090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:14.895 [2024-04-17 14:48:23.343127] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.895 [2024-04-17 14:48:23.483871] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:14.895 [2024-04-17 14:48:23.484205] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:14.895 [2024-04-17 14:48:23.484325] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:14.895 [2024-04-17 14:48:23.484381] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.154 [2024-04-17 14:48:23.539283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.154 [2024-04-17 14:48:23.539618] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:15.154 [2024-04-17 14:48:23.539722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.154 [2024-04-17 14:48:23.539765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.154 [2024-04-17 14:48:23.539894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.154 [2024-04-17 14:48:23.540048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:15.154 [2024-04-17 14:48:23.540106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.154 [2024-04-17 14:48:23.540141] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.155 [2024-04-17 14:48:23.540229] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.155 [2024-04-17 14:48:23.540276] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:15.155 [2024-04-17 14:48:23.540323] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.155 [2024-04-17 14:48:23.540357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.155 [2024-04-17 14:48:23.540607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.155 [2024-04-17 14:48:23.540661] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:15.155 [2024-04-17 14:48:23.540703] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.155 [2024-04-17 14:48:23.540740] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.155 [2024-04-17 14:48:23.540897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.155 [2024-04-17 14:48:23.540945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:15.155 [2024-04-17 14:48:23.540984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.155 [2024-04-17 14:48:23.541017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.155 [2024-04-17 14:48:23.541083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.155 [2024-04-17 14:48:23.541122] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:15.155 [2024-04-17 14:48:23.541247] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.155 [2024-04-17 14:48:23.541292] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
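One detail worth decoding in the statistics dumped above: WAF (write amplification factor) relates media writes to user writes, so with total writes: 960 against user writes: 0 the ratio 960 / 0 has no finite value and is printed as inf rather than a number; none of the 960 media writes counted in that dump were attributed to user I/O.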
00:29:15.155 [2024-04-17 14:48:23.541378] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:15.155 [2024-04-17 14:48:23.541418] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:15.155 [2024-04-17 14:48:23.541551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:15.155 [2024-04-17 14:48:23.541596] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.155 [2024-04-17 14:48:23.541770] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 782.847 ms, result 0 00:29:15.155 true 00:29:15.155 14:48:23 -- ftl/bdevperf.sh@37 -- # killprocess 78554 00:29:15.155 14:48:23 -- common/autotest_common.sh@936 -- # '[' -z 78554 ']' 00:29:15.155 14:48:23 -- common/autotest_common.sh@940 -- # kill -0 78554 00:29:15.155 14:48:23 -- common/autotest_common.sh@941 -- # uname 00:29:15.155 14:48:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:15.155 14:48:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78554 00:29:15.155 killing process with pid 78554 00:29:15.155 Received shutdown signal, test time was about 4.000000 seconds 00:29:15.155 00:29:15.155 Latency(us) 00:29:15.155 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:15.155 =================================================================================================================== 00:29:15.155 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:15.155 14:48:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:15.155 14:48:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:15.155 14:48:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78554' 00:29:15.155 14:48:23 -- common/autotest_common.sh@955 -- # kill 78554 00:29:15.155 14:48:23 -- common/autotest_common.sh@960 -- # wait 78554 00:29:17.788 14:48:26 -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:29:17.788 14:48:26 -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:29:17.788 14:48:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:17.788 14:48:26 -- common/autotest_common.sh@10 -- # set +x 00:29:17.788 Remove shared memory files 00:29:17.788 14:48:26 -- ftl/bdevperf.sh@41 -- # remove_shm 00:29:17.788 14:48:26 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:17.788 14:48:26 -- ftl/common.sh@205 -- # rm -f rm -f 00:29:17.788 14:48:26 -- ftl/common.sh@206 -- # rm -f rm -f 00:29:17.788 14:48:26 -- ftl/common.sh@207 -- # rm -f rm -f 00:29:18.047 14:48:26 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:18.047 14:48:26 -- ftl/common.sh@209 -- # rm -f rm -f 00:29:18.047 ************************************ 00:29:18.047 END TEST ftl_bdevperf 00:29:18.047 ************************************ 00:29:18.047 00:29:18.047 real 0m24.266s 00:29:18.047 user 0m27.276s 00:29:18.047 sys 0m1.311s 00:29:18.047 14:48:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:18.047 14:48:26 -- common/autotest_common.sh@10 -- # set +x 00:29:18.047 14:48:26 -- ftl/ftl.sh@76 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:29:18.047 14:48:26 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:29:18.047 14:48:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:18.047 14:48:26 -- common/autotest_common.sh@10 -- # set +x 00:29:18.047 ************************************ 
00:29:18.047 START TEST ftl_trim 00:29:18.047 ************************************ 00:29:18.047 14:48:26 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:29:18.047 * Looking for test storage... 00:29:18.047 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:18.047 14:48:26 -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:18.047 14:48:26 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:29:18.047 14:48:26 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:18.047 14:48:26 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:18.047 14:48:26 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:29:18.047 14:48:26 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:18.047 14:48:26 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:18.047 14:48:26 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:18.047 14:48:26 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:18.047 14:48:26 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:18.047 14:48:26 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:18.047 14:48:26 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:18.047 14:48:26 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:18.047 14:48:26 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:18.047 14:48:26 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:18.047 14:48:26 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:18.047 14:48:26 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:18.048 14:48:26 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:18.048 14:48:26 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:18.048 14:48:26 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:18.048 14:48:26 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:18.048 14:48:26 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:18.048 14:48:26 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:18.048 14:48:26 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:18.048 14:48:26 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:18.048 14:48:26 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:18.048 14:48:26 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:18.048 14:48:26 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:18.048 14:48:26 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:18.048 14:48:26 -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:18.048 14:48:26 -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:29:18.048 14:48:26 -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:29:18.048 14:48:26 -- ftl/trim.sh@25 -- # timeout=240 00:29:18.048 14:48:26 -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:29:18.048 14:48:26 -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:29:18.048 14:48:26 -- ftl/trim.sh@29 -- # [[ y != y ]] 00:29:18.048 14:48:26 -- ftl/trim.sh@34 -- # 
export FTL_BDEV_NAME=ftl0 00:29:18.048 14:48:26 -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:29:18.048 14:48:26 -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:18.048 14:48:26 -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:18.048 14:48:26 -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:29:18.048 14:48:26 -- ftl/trim.sh@40 -- # svcpid=78924 00:29:18.048 14:48:26 -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:29:18.048 14:48:26 -- ftl/trim.sh@41 -- # waitforlisten 78924 00:29:18.048 14:48:26 -- common/autotest_common.sh@817 -- # '[' -z 78924 ']' 00:29:18.048 14:48:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:18.048 14:48:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:18.048 14:48:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:18.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:18.048 14:48:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:18.048 14:48:26 -- common/autotest_common.sh@10 -- # set +x 00:29:18.307 [2024-04-17 14:48:26.765483] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:29:18.307 [2024-04-17 14:48:26.765883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78924 ] 00:29:18.565 [2024-04-17 14:48:26.949026] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:18.825 [2024-04-17 14:48:27.195704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:18.825 [2024-04-17 14:48:27.195818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:18.825 [2024-04-17 14:48:27.195834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:19.782 14:48:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:19.782 14:48:28 -- common/autotest_common.sh@850 -- # return 0 00:29:19.782 14:48:28 -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:29:19.782 14:48:28 -- ftl/common.sh@54 -- # local name=nvme0 00:29:19.782 14:48:28 -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:19.782 14:48:28 -- ftl/common.sh@56 -- # local size=103424 00:29:19.782 14:48:28 -- ftl/common.sh@59 -- # local base_bdev 00:29:19.782 14:48:28 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:29:20.350 14:48:28 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:29:20.350 14:48:28 -- ftl/common.sh@62 -- # local base_size 00:29:20.350 14:48:28 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:29:20.350 14:48:28 -- common/autotest_common.sh@1364 -- # local bdev_name=nvme0n1 00:29:20.350 14:48:28 -- common/autotest_common.sh@1365 -- # local bdev_info 00:29:20.350 14:48:28 -- common/autotest_common.sh@1366 -- # local bs 00:29:20.350 14:48:28 -- common/autotest_common.sh@1367 -- # local nb 00:29:20.350 14:48:28 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:29:20.350 14:48:28 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:29:20.350 { 00:29:20.350 "name": "nvme0n1", 00:29:20.350 "aliases": [ 00:29:20.350 
"81ba765c-35cd-48b2-9654-3e76f404070a" 00:29:20.350 ], 00:29:20.350 "product_name": "NVMe disk", 00:29:20.350 "block_size": 4096, 00:29:20.350 "num_blocks": 1310720, 00:29:20.350 "uuid": "81ba765c-35cd-48b2-9654-3e76f404070a", 00:29:20.350 "assigned_rate_limits": { 00:29:20.350 "rw_ios_per_sec": 0, 00:29:20.350 "rw_mbytes_per_sec": 0, 00:29:20.350 "r_mbytes_per_sec": 0, 00:29:20.350 "w_mbytes_per_sec": 0 00:29:20.350 }, 00:29:20.350 "claimed": true, 00:29:20.350 "claim_type": "read_many_write_one", 00:29:20.350 "zoned": false, 00:29:20.350 "supported_io_types": { 00:29:20.350 "read": true, 00:29:20.350 "write": true, 00:29:20.350 "unmap": true, 00:29:20.350 "write_zeroes": true, 00:29:20.350 "flush": true, 00:29:20.350 "reset": true, 00:29:20.350 "compare": true, 00:29:20.350 "compare_and_write": false, 00:29:20.350 "abort": true, 00:29:20.350 "nvme_admin": true, 00:29:20.350 "nvme_io": true 00:29:20.350 }, 00:29:20.350 "driver_specific": { 00:29:20.350 "nvme": [ 00:29:20.350 { 00:29:20.350 "pci_address": "0000:00:11.0", 00:29:20.350 "trid": { 00:29:20.350 "trtype": "PCIe", 00:29:20.350 "traddr": "0000:00:11.0" 00:29:20.350 }, 00:29:20.350 "ctrlr_data": { 00:29:20.350 "cntlid": 0, 00:29:20.350 "vendor_id": "0x1b36", 00:29:20.350 "model_number": "QEMU NVMe Ctrl", 00:29:20.350 "serial_number": "12341", 00:29:20.350 "firmware_revision": "8.0.0", 00:29:20.350 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:20.350 "oacs": { 00:29:20.350 "security": 0, 00:29:20.350 "format": 1, 00:29:20.350 "firmware": 0, 00:29:20.350 "ns_manage": 1 00:29:20.350 }, 00:29:20.350 "multi_ctrlr": false, 00:29:20.350 "ana_reporting": false 00:29:20.350 }, 00:29:20.350 "vs": { 00:29:20.350 "nvme_version": "1.4" 00:29:20.350 }, 00:29:20.350 "ns_data": { 00:29:20.350 "id": 1, 00:29:20.350 "can_share": false 00:29:20.350 } 00:29:20.350 } 00:29:20.350 ], 00:29:20.350 "mp_policy": "active_passive" 00:29:20.350 } 00:29:20.350 } 00:29:20.350 ]' 00:29:20.350 14:48:28 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:29:20.680 14:48:28 -- common/autotest_common.sh@1369 -- # bs=4096 00:29:20.680 14:48:28 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:29:20.680 14:48:29 -- common/autotest_common.sh@1370 -- # nb=1310720 00:29:20.680 14:48:29 -- common/autotest_common.sh@1373 -- # bdev_size=5120 00:29:20.680 14:48:29 -- common/autotest_common.sh@1374 -- # echo 5120 00:29:20.680 14:48:29 -- ftl/common.sh@63 -- # base_size=5120 00:29:20.680 14:48:29 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:29:20.680 14:48:29 -- ftl/common.sh@67 -- # clear_lvols 00:29:20.680 14:48:29 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:20.680 14:48:29 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:20.680 14:48:29 -- ftl/common.sh@28 -- # stores=37b75a3d-360a-44c3-9706-47880e510e66 00:29:20.680 14:48:29 -- ftl/common.sh@29 -- # for lvs in $stores 00:29:20.680 14:48:29 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 37b75a3d-360a-44c3-9706-47880e510e66 00:29:20.963 14:48:29 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:29:21.245 14:48:29 -- ftl/common.sh@68 -- # lvs=8dae4429-d930-4c54-bd92-f46b8049ad7c 00:29:21.246 14:48:29 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8dae4429-d930-4c54-bd92-f46b8049ad7c 00:29:21.508 14:48:29 -- ftl/trim.sh@43 -- # split_bdev=257ca617-2966-460e-8f4c-79ba0ee83db9 
00:29:21.508 14:48:29 -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 257ca617-2966-460e-8f4c-79ba0ee83db9 00:29:21.508 14:48:29 -- ftl/common.sh@35 -- # local name=nvc0 00:29:21.508 14:48:29 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:21.508 14:48:29 -- ftl/common.sh@37 -- # local base_bdev=257ca617-2966-460e-8f4c-79ba0ee83db9 00:29:21.508 14:48:29 -- ftl/common.sh@38 -- # local cache_size= 00:29:21.508 14:48:29 -- ftl/common.sh@41 -- # get_bdev_size 257ca617-2966-460e-8f4c-79ba0ee83db9 00:29:21.508 14:48:29 -- common/autotest_common.sh@1364 -- # local bdev_name=257ca617-2966-460e-8f4c-79ba0ee83db9 00:29:21.508 14:48:29 -- common/autotest_common.sh@1365 -- # local bdev_info 00:29:21.508 14:48:29 -- common/autotest_common.sh@1366 -- # local bs 00:29:21.508 14:48:29 -- common/autotest_common.sh@1367 -- # local nb 00:29:21.508 14:48:29 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 257ca617-2966-460e-8f4c-79ba0ee83db9 00:29:21.766 14:48:30 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:29:21.766 { 00:29:21.766 "name": "257ca617-2966-460e-8f4c-79ba0ee83db9", 00:29:21.766 "aliases": [ 00:29:21.766 "lvs/nvme0n1p0" 00:29:21.766 ], 00:29:21.766 "product_name": "Logical Volume", 00:29:21.766 "block_size": 4096, 00:29:21.766 "num_blocks": 26476544, 00:29:21.766 "uuid": "257ca617-2966-460e-8f4c-79ba0ee83db9", 00:29:21.766 "assigned_rate_limits": { 00:29:21.766 "rw_ios_per_sec": 0, 00:29:21.766 "rw_mbytes_per_sec": 0, 00:29:21.766 "r_mbytes_per_sec": 0, 00:29:21.766 "w_mbytes_per_sec": 0 00:29:21.766 }, 00:29:21.766 "claimed": false, 00:29:21.766 "zoned": false, 00:29:21.766 "supported_io_types": { 00:29:21.766 "read": true, 00:29:21.766 "write": true, 00:29:21.766 "unmap": true, 00:29:21.766 "write_zeroes": true, 00:29:21.766 "flush": false, 00:29:21.766 "reset": true, 00:29:21.766 "compare": false, 00:29:21.766 "compare_and_write": false, 00:29:21.766 "abort": false, 00:29:21.766 "nvme_admin": false, 00:29:21.766 "nvme_io": false 00:29:21.766 }, 00:29:21.766 "driver_specific": { 00:29:21.766 "lvol": { 00:29:21.766 "lvol_store_uuid": "8dae4429-d930-4c54-bd92-f46b8049ad7c", 00:29:21.766 "base_bdev": "nvme0n1", 00:29:21.766 "thin_provision": true, 00:29:21.766 "snapshot": false, 00:29:21.766 "clone": false, 00:29:21.766 "esnap_clone": false 00:29:21.766 } 00:29:21.766 } 00:29:21.766 } 00:29:21.766 ]' 00:29:21.766 14:48:30 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:29:21.766 14:48:30 -- common/autotest_common.sh@1369 -- # bs=4096 00:29:21.766 14:48:30 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:29:21.766 14:48:30 -- common/autotest_common.sh@1370 -- # nb=26476544 00:29:21.766 14:48:30 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:29:21.766 14:48:30 -- common/autotest_common.sh@1374 -- # echo 103424 00:29:21.766 14:48:30 -- ftl/common.sh@41 -- # local base_size=5171 00:29:21.766 14:48:30 -- ftl/common.sh@44 -- # local nvc_bdev 00:29:21.766 14:48:30 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:29:22.025 14:48:30 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:29:22.025 14:48:30 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:29:22.025 14:48:30 -- ftl/common.sh@48 -- # get_bdev_size 257ca617-2966-460e-8f4c-79ba0ee83db9 00:29:22.025 14:48:30 -- common/autotest_common.sh@1364 -- # local bdev_name=257ca617-2966-460e-8f4c-79ba0ee83db9 00:29:22.025 14:48:30 -- common/autotest_common.sh@1365 -- # 
local bdev_info 00:29:22.025 14:48:30 -- common/autotest_common.sh@1366 -- # local bs 00:29:22.025 14:48:30 -- common/autotest_common.sh@1367 -- # local nb 00:29:22.025 14:48:30 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 257ca617-2966-460e-8f4c-79ba0ee83db9 00:29:22.591 14:48:30 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:29:22.591 { 00:29:22.591 "name": "257ca617-2966-460e-8f4c-79ba0ee83db9", 00:29:22.591 "aliases": [ 00:29:22.591 "lvs/nvme0n1p0" 00:29:22.591 ], 00:29:22.591 "product_name": "Logical Volume", 00:29:22.591 "block_size": 4096, 00:29:22.591 "num_blocks": 26476544, 00:29:22.591 "uuid": "257ca617-2966-460e-8f4c-79ba0ee83db9", 00:29:22.591 "assigned_rate_limits": { 00:29:22.591 "rw_ios_per_sec": 0, 00:29:22.591 "rw_mbytes_per_sec": 0, 00:29:22.591 "r_mbytes_per_sec": 0, 00:29:22.591 "w_mbytes_per_sec": 0 00:29:22.591 }, 00:29:22.591 "claimed": false, 00:29:22.591 "zoned": false, 00:29:22.591 "supported_io_types": { 00:29:22.591 "read": true, 00:29:22.591 "write": true, 00:29:22.591 "unmap": true, 00:29:22.591 "write_zeroes": true, 00:29:22.591 "flush": false, 00:29:22.591 "reset": true, 00:29:22.591 "compare": false, 00:29:22.591 "compare_and_write": false, 00:29:22.591 "abort": false, 00:29:22.591 "nvme_admin": false, 00:29:22.591 "nvme_io": false 00:29:22.591 }, 00:29:22.591 "driver_specific": { 00:29:22.591 "lvol": { 00:29:22.591 "lvol_store_uuid": "8dae4429-d930-4c54-bd92-f46b8049ad7c", 00:29:22.591 "base_bdev": "nvme0n1", 00:29:22.591 "thin_provision": true, 00:29:22.591 "snapshot": false, 00:29:22.591 "clone": false, 00:29:22.591 "esnap_clone": false 00:29:22.591 } 00:29:22.591 } 00:29:22.591 } 00:29:22.591 ]' 00:29:22.591 14:48:30 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:29:22.591 14:48:30 -- common/autotest_common.sh@1369 -- # bs=4096 00:29:22.591 14:48:30 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:29:22.591 14:48:30 -- common/autotest_common.sh@1370 -- # nb=26476544 00:29:22.591 14:48:30 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:29:22.591 14:48:30 -- common/autotest_common.sh@1374 -- # echo 103424 00:29:22.591 14:48:30 -- ftl/common.sh@48 -- # cache_size=5171 00:29:22.591 14:48:30 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:29:22.591 14:48:31 -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:29:22.591 14:48:31 -- ftl/trim.sh@46 -- # l2p_percentage=60 00:29:22.591 14:48:31 -- ftl/trim.sh@47 -- # get_bdev_size 257ca617-2966-460e-8f4c-79ba0ee83db9 00:29:22.592 14:48:31 -- common/autotest_common.sh@1364 -- # local bdev_name=257ca617-2966-460e-8f4c-79ba0ee83db9 00:29:22.592 14:48:31 -- common/autotest_common.sh@1365 -- # local bdev_info 00:29:22.592 14:48:31 -- common/autotest_common.sh@1366 -- # local bs 00:29:22.592 14:48:31 -- common/autotest_common.sh@1367 -- # local nb 00:29:22.592 14:48:31 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 257ca617-2966-460e-8f4c-79ba0ee83db9 00:29:22.850 14:48:31 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:29:22.850 { 00:29:22.850 "name": "257ca617-2966-460e-8f4c-79ba0ee83db9", 00:29:22.850 "aliases": [ 00:29:22.850 "lvs/nvme0n1p0" 00:29:22.850 ], 00:29:22.850 "product_name": "Logical Volume", 00:29:22.850 "block_size": 4096, 00:29:22.850 "num_blocks": 26476544, 00:29:22.850 "uuid": "257ca617-2966-460e-8f4c-79ba0ee83db9", 00:29:22.850 "assigned_rate_limits": { 00:29:22.850 "rw_ios_per_sec": 0, 
00:29:22.850 "rw_mbytes_per_sec": 0, 00:29:22.851 "r_mbytes_per_sec": 0, 00:29:22.851 "w_mbytes_per_sec": 0 00:29:22.851 }, 00:29:22.851 "claimed": false, 00:29:22.851 "zoned": false, 00:29:22.851 "supported_io_types": { 00:29:22.851 "read": true, 00:29:22.851 "write": true, 00:29:22.851 "unmap": true, 00:29:22.851 "write_zeroes": true, 00:29:22.851 "flush": false, 00:29:22.851 "reset": true, 00:29:22.851 "compare": false, 00:29:22.851 "compare_and_write": false, 00:29:22.851 "abort": false, 00:29:22.851 "nvme_admin": false, 00:29:22.851 "nvme_io": false 00:29:22.851 }, 00:29:22.851 "driver_specific": { 00:29:22.851 "lvol": { 00:29:22.851 "lvol_store_uuid": "8dae4429-d930-4c54-bd92-f46b8049ad7c", 00:29:22.851 "base_bdev": "nvme0n1", 00:29:22.851 "thin_provision": true, 00:29:22.851 "snapshot": false, 00:29:22.851 "clone": false, 00:29:22.851 "esnap_clone": false 00:29:22.851 } 00:29:22.851 } 00:29:22.851 } 00:29:22.851 ]' 00:29:22.851 14:48:31 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:29:22.851 14:48:31 -- common/autotest_common.sh@1369 -- # bs=4096 00:29:22.851 14:48:31 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:29:23.110 14:48:31 -- common/autotest_common.sh@1370 -- # nb=26476544 00:29:23.110 14:48:31 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:29:23.110 14:48:31 -- common/autotest_common.sh@1374 -- # echo 103424 00:29:23.110 14:48:31 -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:29:23.110 14:48:31 -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 257ca617-2966-460e-8f4c-79ba0ee83db9 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:29:23.110 [2024-04-17 14:48:31.704184] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.110 [2024-04-17 14:48:31.704510] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:23.110 [2024-04-17 14:48:31.704654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:23.110 [2024-04-17 14:48:31.704705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.110 [2024-04-17 14:48:31.709197] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.110 [2024-04-17 14:48:31.709432] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:23.110 [2024-04-17 14:48:31.709668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.410 ms 00:29:23.110 [2024-04-17 14:48:31.709804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.110 [2024-04-17 14:48:31.710083] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:23.110 [2024-04-17 14:48:31.711856] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:23.110 [2024-04-17 14:48:31.712020] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.110 [2024-04-17 14:48:31.712103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:23.110 [2024-04-17 14:48:31.712147] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.957 ms 00:29:23.370 [2024-04-17 14:48:31.712241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.370 [2024-04-17 14:48:31.712570] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5b2ea0b9-3b71-4165-8192-0f28abbb9f7b 00:29:23.370 [2024-04-17 14:48:31.714293] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.370 [2024-04-17 14:48:31.714442] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:29:23.370 [2024-04-17 14:48:31.714545] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:29:23.370 [2024-04-17 14:48:31.714607] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.370 [2024-04-17 14:48:31.722663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.370 [2024-04-17 14:48:31.722924] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:23.370 [2024-04-17 14:48:31.723019] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.887 ms 00:29:23.370 [2024-04-17 14:48:31.723065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.370 [2024-04-17 14:48:31.723306] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.370 [2024-04-17 14:48:31.723360] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:23.370 [2024-04-17 14:48:31.723397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:29:23.370 [2024-04-17 14:48:31.723563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.370 [2024-04-17 14:48:31.723652] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.370 [2024-04-17 14:48:31.723747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:23.370 [2024-04-17 14:48:31.723815] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:29:23.370 [2024-04-17 14:48:31.723856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.370 [2024-04-17 14:48:31.723922] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:29:23.370 [2024-04-17 14:48:31.730978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.370 [2024-04-17 14:48:31.731204] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:23.370 [2024-04-17 14:48:31.731300] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.057 ms 00:29:23.370 [2024-04-17 14:48:31.731343] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.370 [2024-04-17 14:48:31.731510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.370 [2024-04-17 14:48:31.731633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:23.370 [2024-04-17 14:48:31.731712] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:23.370 [2024-04-17 14:48:31.731775] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.370 [2024-04-17 14:48:31.731846] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:29:23.370 [2024-04-17 14:48:31.731994] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:29:23.370 [2024-04-17 14:48:31.732139] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:23.370 [2024-04-17 14:48:31.732196] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:29:23.370 [2024-04-17 14:48:31.732260] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 
00:29:23.370 [2024-04-17 14:48:31.732318] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:23.370 [2024-04-17 14:48:31.732379] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:29:23.370 [2024-04-17 14:48:31.732498] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:23.370 [2024-04-17 14:48:31.732572] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:29:23.370 [2024-04-17 14:48:31.732607] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:29:23.370 [2024-04-17 14:48:31.732646] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.370 [2024-04-17 14:48:31.732681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:23.370 [2024-04-17 14:48:31.732718] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.801 ms 00:29:23.370 [2024-04-17 14:48:31.732753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.370 [2024-04-17 14:48:31.732861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.370 [2024-04-17 14:48:31.732964] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:23.370 [2024-04-17 14:48:31.733024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:29:23.370 [2024-04-17 14:48:31.733059] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.370 [2024-04-17 14:48:31.733206] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:23.370 [2024-04-17 14:48:31.733246] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:23.370 [2024-04-17 14:48:31.733283] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:23.370 [2024-04-17 14:48:31.733379] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:23.370 [2024-04-17 14:48:31.733441] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:23.370 [2024-04-17 14:48:31.733475] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:23.370 [2024-04-17 14:48:31.733525] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:29:23.370 [2024-04-17 14:48:31.733561] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:23.371 [2024-04-17 14:48:31.733598] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:29:23.371 [2024-04-17 14:48:31.733632] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:23.371 [2024-04-17 14:48:31.733668] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:23.371 [2024-04-17 14:48:31.733701] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:29:23.371 [2024-04-17 14:48:31.733819] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:23.371 [2024-04-17 14:48:31.733861] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:23.371 [2024-04-17 14:48:31.733898] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:29:23.371 [2024-04-17 14:48:31.733932] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:23.371 [2024-04-17 14:48:31.733970] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:23.371 [2024-04-17 14:48:31.734004] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:29:23.371 
[2024-04-17 14:48:31.734133] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:23.371 [2024-04-17 14:48:31.734167] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:29:23.371 [2024-04-17 14:48:31.734206] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:29:23.371 [2024-04-17 14:48:31.734240] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:29:23.371 [2024-04-17 14:48:31.734327] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:23.371 [2024-04-17 14:48:31.734385] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:29:23.371 [2024-04-17 14:48:31.734423] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:29:23.371 [2024-04-17 14:48:31.734457] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:23.371 [2024-04-17 14:48:31.734604] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:29:23.371 [2024-04-17 14:48:31.734655] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:29:23.371 [2024-04-17 14:48:31.734695] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:23.371 [2024-04-17 14:48:31.734729] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:29:23.371 [2024-04-17 14:48:31.734764] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:29:23.371 [2024-04-17 14:48:31.734798] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:23.371 [2024-04-17 14:48:31.734835] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:29:23.371 [2024-04-17 14:48:31.734868] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:29:23.371 [2024-04-17 14:48:31.734904] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:23.371 [2024-04-17 14:48:31.734993] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:29:23.371 [2024-04-17 14:48:31.735092] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:23.371 [2024-04-17 14:48:31.735132] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:23.371 [2024-04-17 14:48:31.735203] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:29:23.371 [2024-04-17 14:48:31.735241] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:23.371 [2024-04-17 14:48:31.735277] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:23.371 [2024-04-17 14:48:31.735352] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:23.371 [2024-04-17 14:48:31.735401] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:23.371 [2024-04-17 14:48:31.735436] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:23.371 [2024-04-17 14:48:31.735473] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:23.371 [2024-04-17 14:48:31.735553] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:23.371 [2024-04-17 14:48:31.735595] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:23.371 [2024-04-17 14:48:31.735630] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:23.371 [2024-04-17 14:48:31.735714] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:23.371 [2024-04-17 14:48:31.735754] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 102400.00 MiB 00:29:23.371 [2024-04-17 14:48:31.735801] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:23.371 [2024-04-17 14:48:31.735908] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:23.371 [2024-04-17 14:48:31.736005] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:29:23.371 [2024-04-17 14:48:31.736062] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:29:23.371 [2024-04-17 14:48:31.736166] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:29:23.371 [2024-04-17 14:48:31.736219] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:29:23.371 [2024-04-17 14:48:31.736273] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:29:23.371 [2024-04-17 14:48:31.736441] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:29:23.371 [2024-04-17 14:48:31.736505] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:29:23.371 [2024-04-17 14:48:31.736560] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:29:23.371 [2024-04-17 14:48:31.736617] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:29:23.371 [2024-04-17 14:48:31.736668] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:29:23.371 [2024-04-17 14:48:31.736721] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:29:23.371 [2024-04-17 14:48:31.736896] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:29:23.371 [2024-04-17 14:48:31.736951] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:29:23.371 [2024-04-17 14:48:31.737002] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:23.371 [2024-04-17 14:48:31.737058] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:23.371 [2024-04-17 14:48:31.737110] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:23.371 [2024-04-17 14:48:31.737228] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:23.371 [2024-04-17 14:48:31.737287] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:23.371 [2024-04-17 14:48:31.737341] upgrade/ftl_sb_v5.c: 
429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:23.371 [2024-04-17 14:48:31.737395] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.371 [2024-04-17 14:48:31.737434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:23.371 [2024-04-17 14:48:31.737550] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.233 ms 00:29:23.371 [2024-04-17 14:48:31.737598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.371 [2024-04-17 14:48:31.765191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.371 [2024-04-17 14:48:31.765452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:23.371 [2024-04-17 14:48:31.765615] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.420 ms 00:29:23.371 [2024-04-17 14:48:31.765665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.371 [2024-04-17 14:48:31.765875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.371 [2024-04-17 14:48:31.765955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:23.371 [2024-04-17 14:48:31.766039] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:29:23.372 [2024-04-17 14:48:31.766076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.372 [2024-04-17 14:48:31.826878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.372 [2024-04-17 14:48:31.827167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:23.372 [2024-04-17 14:48:31.827267] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.733 ms 00:29:23.372 [2024-04-17 14:48:31.827313] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.372 [2024-04-17 14:48:31.827485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.372 [2024-04-17 14:48:31.827660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:23.372 [2024-04-17 14:48:31.827758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:23.372 [2024-04-17 14:48:31.827801] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.372 [2024-04-17 14:48:31.828313] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.372 [2024-04-17 14:48:31.828373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:23.372 [2024-04-17 14:48:31.828412] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.450 ms 00:29:23.372 [2024-04-17 14:48:31.828449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.372 [2024-04-17 14:48:31.828725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.372 [2024-04-17 14:48:31.828783] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:23.372 [2024-04-17 14:48:31.828823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:29:23.372 [2024-04-17 14:48:31.828950] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.372 [2024-04-17 14:48:31.867492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.372 [2024-04-17 14:48:31.867768] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:23.372 [2024-04-17 14:48:31.867866] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.475 ms 00:29:23.372 [2024-04-17 14:48:31.867911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.372 [2024-04-17 14:48:31.885532] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:29:23.372 [2024-04-17 14:48:31.904122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.372 [2024-04-17 14:48:31.904539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:23.372 [2024-04-17 14:48:31.904653] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.957 ms 00:29:23.372 [2024-04-17 14:48:31.904701] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.630 [2024-04-17 14:48:31.993900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.630 [2024-04-17 14:48:31.994176] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:29:23.630 [2024-04-17 14:48:31.994287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.008 ms 00:29:23.630 [2024-04-17 14:48:31.994329] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.630 [2024-04-17 14:48:31.994540] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:29:23.630 [2024-04-17 14:48:31.994809] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:29:26.170 [2024-04-17 14:48:34.432130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.170 [2024-04-17 14:48:34.432387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:29:26.170 [2024-04-17 14:48:34.432536] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2437.571 ms 00:29:26.170 [2024-04-17 14:48:34.432584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.170 [2024-04-17 14:48:34.432926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.170 [2024-04-17 14:48:34.432976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:26.171 [2024-04-17 14:48:34.433073] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:29:26.171 [2024-04-17 14:48:34.433113] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.171 [2024-04-17 14:48:34.479407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.171 [2024-04-17 14:48:34.479690] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:29:26.171 [2024-04-17 14:48:34.479836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.204 ms 00:29:26.171 [2024-04-17 14:48:34.479885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.171 [2024-04-17 14:48:34.523973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.171 [2024-04-17 14:48:34.524236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:29:26.171 [2024-04-17 14:48:34.524366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.901 ms 00:29:26.171 [2024-04-17 14:48:34.524402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.171 [2024-04-17 14:48:34.525010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.171 [2024-04-17 14:48:34.525143] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize P2L checkpointing 00:29:26.171 [2024-04-17 14:48:34.525254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:29:26.171 [2024-04-17 14:48:34.525293] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.171 [2024-04-17 14:48:34.633267] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.171 [2024-04-17 14:48:34.633567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:29:26.171 [2024-04-17 14:48:34.633692] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 107.882 ms 00:29:26.171 [2024-04-17 14:48:34.633733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.171 [2024-04-17 14:48:34.679715] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.171 [2024-04-17 14:48:34.679963] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:29:26.171 [2024-04-17 14:48:34.680128] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.780 ms 00:29:26.171 [2024-04-17 14:48:34.680169] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.171 [2024-04-17 14:48:34.685822] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.171 [2024-04-17 14:48:34.686020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:29:26.171 [2024-04-17 14:48:34.686178] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.494 ms 00:29:26.171 [2024-04-17 14:48:34.686220] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.171 [2024-04-17 14:48:34.733762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.171 [2024-04-17 14:48:34.734054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:26.171 [2024-04-17 14:48:34.734159] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.413 ms 00:29:26.171 [2024-04-17 14:48:34.734200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.171 [2024-04-17 14:48:34.734404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.171 [2024-04-17 14:48:34.734601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:26.171 [2024-04-17 14:48:34.734651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:29:26.171 [2024-04-17 14:48:34.734686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.171 [2024-04-17 14:48:34.734824] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:26.171 [2024-04-17 14:48:34.734870] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:26.171 [2024-04-17 14:48:34.734910] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:29:26.171 [2024-04-17 14:48:34.735041] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:26.171 [2024-04-17 14:48:34.736231] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:26.171 [2024-04-17 14:48:34.742774] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3031.734 ms, result 0 00:29:26.171 [2024-04-17 14:48:34.743872] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:26.171 { 00:29:26.171 "name": "ftl0", 00:29:26.171 "uuid": "5b2ea0b9-3b71-4165-8192-0f28abbb9f7b" 
00:29:26.171 } 00:29:26.171 14:48:34 -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:29:26.171 14:48:34 -- common/autotest_common.sh@885 -- # local bdev_name=ftl0 00:29:26.171 14:48:34 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:29:26.171 14:48:34 -- common/autotest_common.sh@887 -- # local i 00:29:26.171 14:48:34 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:29:26.171 14:48:34 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:29:26.171 14:48:34 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:26.738 14:48:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:29:26.738 [ 00:29:26.738 { 00:29:26.738 "name": "ftl0", 00:29:26.738 "aliases": [ 00:29:26.738 "5b2ea0b9-3b71-4165-8192-0f28abbb9f7b" 00:29:26.738 ], 00:29:26.738 "product_name": "FTL disk", 00:29:26.738 "block_size": 4096, 00:29:26.738 "num_blocks": 23592960, 00:29:26.738 "uuid": "5b2ea0b9-3b71-4165-8192-0f28abbb9f7b", 00:29:26.738 "assigned_rate_limits": { 00:29:26.738 "rw_ios_per_sec": 0, 00:29:26.738 "rw_mbytes_per_sec": 0, 00:29:26.738 "r_mbytes_per_sec": 0, 00:29:26.738 "w_mbytes_per_sec": 0 00:29:26.738 }, 00:29:26.738 "claimed": false, 00:29:26.738 "zoned": false, 00:29:26.738 "supported_io_types": { 00:29:26.738 "read": true, 00:29:26.738 "write": true, 00:29:26.738 "unmap": true, 00:29:26.738 "write_zeroes": true, 00:29:26.738 "flush": true, 00:29:26.738 "reset": false, 00:29:26.738 "compare": false, 00:29:26.738 "compare_and_write": false, 00:29:26.738 "abort": false, 00:29:26.738 "nvme_admin": false, 00:29:26.738 "nvme_io": false 00:29:26.738 }, 00:29:26.738 "driver_specific": { 00:29:26.738 "ftl": { 00:29:26.738 "base_bdev": "257ca617-2966-460e-8f4c-79ba0ee83db9", 00:29:26.738 "cache": "nvc0n1p0" 00:29:26.738 } 00:29:26.738 } 00:29:26.738 } 00:29:26.738 ] 00:29:26.738 14:48:35 -- common/autotest_common.sh@893 -- # return 0 00:29:26.738 14:48:35 -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:29:26.738 14:48:35 -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:29:27.043 14:48:35 -- ftl/trim.sh@56 -- # echo ']}' 00:29:27.043 14:48:35 -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:29:27.338 14:48:35 -- ftl/trim.sh@59 -- # bdev_info='[ 00:29:27.338 { 00:29:27.338 "name": "ftl0", 00:29:27.338 "aliases": [ 00:29:27.338 "5b2ea0b9-3b71-4165-8192-0f28abbb9f7b" 00:29:27.338 ], 00:29:27.338 "product_name": "FTL disk", 00:29:27.338 "block_size": 4096, 00:29:27.338 "num_blocks": 23592960, 00:29:27.338 "uuid": "5b2ea0b9-3b71-4165-8192-0f28abbb9f7b", 00:29:27.338 "assigned_rate_limits": { 00:29:27.338 "rw_ios_per_sec": 0, 00:29:27.338 "rw_mbytes_per_sec": 0, 00:29:27.338 "r_mbytes_per_sec": 0, 00:29:27.338 "w_mbytes_per_sec": 0 00:29:27.338 }, 00:29:27.338 "claimed": false, 00:29:27.338 "zoned": false, 00:29:27.338 "supported_io_types": { 00:29:27.338 "read": true, 00:29:27.338 "write": true, 00:29:27.338 "unmap": true, 00:29:27.338 "write_zeroes": true, 00:29:27.338 "flush": true, 00:29:27.338 "reset": false, 00:29:27.338 "compare": false, 00:29:27.338 "compare_and_write": false, 00:29:27.338 "abort": false, 00:29:27.338 "nvme_admin": false, 00:29:27.338 "nvme_io": false 00:29:27.338 }, 00:29:27.338 "driver_specific": { 00:29:27.338 "ftl": { 00:29:27.338 "base_bdev": "257ca617-2966-460e-8f4c-79ba0ee83db9", 00:29:27.338 "cache": "nvc0n1p0" 00:29:27.338 } 00:29:27.338 } 00:29:27.338 } 
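(Aside: the bdev_get_bdevs output above is internally consistent with the startup layout dump. "num_blocks": 23592960 equals the "L2P entries" figure; at the 4096-byte "block_size" that is exactly 90 GiB of user-addressable space, and at 4 bytes per entry the L2P table itself accounts for the 90.00 MiB "Region l2p". A minimal sketch of this check plus the waitforbdev polling step, in plain Python; the helper name is ours, not SPDK's, and only the rpc.py invocation that appears in this log is used:)

    import json, subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

    def wait_for_bdev(name, timeout_ms=2000):
        # Same RPC the test issues above: bdev_get_bdevs -b ftl0 -t 2000
        # (-t makes the target wait up to timeout_ms for the bdev to appear).
        out = subprocess.check_output([RPC, "bdev_get_bdevs", "-b", name, "-t", str(timeout_ms)])
        return json.loads(out)[0]

    MiB = 1024 * 1024
    info = wait_for_bdev("ftl0")
    assert info["num_blocks"] == 23592960          # == "L2P entries" in the startup dump
    assert info["block_size"] == 4096
    assert info["num_blocks"] * 4 == 90 * MiB      # L2P table -> "Region l2p ... 90.00 MiB"
    assert info["num_blocks"] * info["block_size"] == 90 * 1024 * MiB   # 90 GiB exposed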
00:29:27.338 ]' 00:29:27.338 14:48:35 -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:29:27.338 14:48:35 -- ftl/trim.sh@60 -- # nb=23592960 00:29:27.338 14:48:35 -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:29:27.338 [2024-04-17 14:48:35.925202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.338 [2024-04-17 14:48:35.925272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:27.338 [2024-04-17 14:48:35.925291] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:27.338 [2024-04-17 14:48:35.925311] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.338 [2024-04-17 14:48:35.925366] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:29:27.338 [2024-04-17 14:48:35.929606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.338 [2024-04-17 14:48:35.929664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:27.338 [2024-04-17 14:48:35.929684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.211 ms 00:29:27.338 [2024-04-17 14:48:35.929697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.338 [2024-04-17 14:48:35.930516] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.338 [2024-04-17 14:48:35.930547] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:27.338 [2024-04-17 14:48:35.930564] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.726 ms 00:29:27.338 [2024-04-17 14:48:35.930577] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.338 [2024-04-17 14:48:35.934041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.338 [2024-04-17 14:48:35.934072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:27.338 [2024-04-17 14:48:35.934093] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.412 ms 00:29:27.338 [2024-04-17 14:48:35.934106] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.598 [2024-04-17 14:48:35.941098] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.598 [2024-04-17 14:48:35.941160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:29:27.598 [2024-04-17 14:48:35.941184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.897 ms 00:29:27.598 [2024-04-17 14:48:35.941197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.598 [2024-04-17 14:48:35.989192] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.598 [2024-04-17 14:48:35.989275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:27.598 [2024-04-17 14:48:35.989297] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.837 ms 00:29:27.598 [2024-04-17 14:48:35.989310] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.598 [2024-04-17 14:48:36.017795] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.598 [2024-04-17 14:48:36.017876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:27.598 [2024-04-17 14:48:36.017899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.314 ms 00:29:27.598 [2024-04-17 14:48:36.017912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0
00:29:27.598 [2024-04-17 14:48:36.018238] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:27.598 [2024-04-17 14:48:36.018256] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:29:27.598 [2024-04-17 14:48:36.018272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms
00:29:27.598 [2024-04-17 14:48:36.018283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:27.598 [2024-04-17 14:48:36.064978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:27.598 [2024-04-17 14:48:36.065045] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata
00:29:27.598 [2024-04-17 14:48:36.065070] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.637 ms
00:29:27.598 [2024-04-17 14:48:36.065081] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:27.598 [2024-04-17 14:48:36.112938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:27.598 [2024-04-17 14:48:36.113019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata
00:29:27.598 [2024-04-17 14:48:36.113057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.689 ms
00:29:27.598 [2024-04-17 14:48:36.113069] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:27.598 [2024-04-17 14:48:36.160129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:27.598 [2024-04-17 14:48:36.160229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:29:27.598 [2024-04-17 14:48:36.160253] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.875 ms
00:29:27.598 [2024-04-17 14:48:36.160265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:27.858 [2024-04-17 14:48:36.207403] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:27.858 [2024-04-17 14:48:36.207480] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:29:27.858 [2024-04-17 14:48:36.207517] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.867 ms
00:29:27.858 [2024-04-17 14:48:36.207530] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:27.858 [2024-04-17 14:48:36.207693] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:29:27.858 [2024-04-17 14:48:36.207717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 1-100: 0 / 261120 wr_cnt: 0 state: free
00:29:27.859 [2024-04-17 14:48:36.209212] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:29:27.859 [2024-04-17 14:48:36.209226] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5b2ea0b9-3b71-4165-8192-0f28abbb9f7b
00:29:27.859 [2024-04-17 14:48:36.209240] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:29:27.859 [2024-04-17 14:48:36.209254] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:29:27.859 [2024-04-17 14:48:36.209265] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:29:27.859 [2024-04-17 14:48:36.209280] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:29:27.859 [2024-04-17 14:48:36.209292] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:29:27.859 [2024-04-17 14:48:36.209310] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:29:27.859 [2024-04-17 14:48:36.209321] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:29:27.859 [2024-04-17
14:48:36.209335] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:27.859 [2024-04-17 14:48:36.209345] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:27.859 [2024-04-17 14:48:36.209361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.859 [2024-04-17 14:48:36.209375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:27.859 [2024-04-17 14:48:36.209393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.670 ms 00:29:27.859 [2024-04-17 14:48:36.209405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.859 [2024-04-17 14:48:36.233688] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.859 [2024-04-17 14:48:36.233767] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:27.859 [2024-04-17 14:48:36.233788] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.223 ms 00:29:27.859 [2024-04-17 14:48:36.233822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.859 [2024-04-17 14:48:36.234247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.859 [2024-04-17 14:48:36.234269] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:27.859 [2024-04-17 14:48:36.234285] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:29:27.859 [2024-04-17 14:48:36.234297] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.859 [2024-04-17 14:48:36.317029] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.859 [2024-04-17 14:48:36.317094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:27.859 [2024-04-17 14:48:36.317140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.859 [2024-04-17 14:48:36.317152] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.859 [2024-04-17 14:48:36.317304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.859 [2024-04-17 14:48:36.317318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:27.859 [2024-04-17 14:48:36.317337] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.859 [2024-04-17 14:48:36.317348] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.859 [2024-04-17 14:48:36.317439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.859 [2024-04-17 14:48:36.317454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:27.859 [2024-04-17 14:48:36.317469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.859 [2024-04-17 14:48:36.317480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.859 [2024-04-17 14:48:36.317558] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.859 [2024-04-17 14:48:36.317571] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:27.860 [2024-04-17 14:48:36.317586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.860 [2024-04-17 14:48:36.317598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.118 [2024-04-17 14:48:36.479914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.118 [2024-04-17 14:48:36.479999] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize NV cache 00:29:28.118 [2024-04-17 14:48:36.480020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.118 [2024-04-17 14:48:36.480037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.118 [2024-04-17 14:48:36.535693] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.118 [2024-04-17 14:48:36.535768] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:28.118 [2024-04-17 14:48:36.535792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.118 [2024-04-17 14:48:36.535805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.118 [2024-04-17 14:48:36.535917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.118 [2024-04-17 14:48:36.535933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:28.118 [2024-04-17 14:48:36.535948] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.118 [2024-04-17 14:48:36.535961] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.118 [2024-04-17 14:48:36.536031] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.118 [2024-04-17 14:48:36.536043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:28.118 [2024-04-17 14:48:36.536058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.118 [2024-04-17 14:48:36.536070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.118 [2024-04-17 14:48:36.536230] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.118 [2024-04-17 14:48:36.536246] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:28.118 [2024-04-17 14:48:36.536280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.118 [2024-04-17 14:48:36.536293] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.118 [2024-04-17 14:48:36.536367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.118 [2024-04-17 14:48:36.536384] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:28.118 [2024-04-17 14:48:36.536402] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.118 [2024-04-17 14:48:36.536415] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.118 [2024-04-17 14:48:36.536477] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.118 [2024-04-17 14:48:36.536512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:28.118 [2024-04-17 14:48:36.536529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.118 [2024-04-17 14:48:36.536541] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.118 [2024-04-17 14:48:36.536614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.118 [2024-04-17 14:48:36.536627] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:28.118 [2024-04-17 14:48:36.536642] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.118 [2024-04-17 14:48:36.536654] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.118 [2024-04-17 14:48:36.536872] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, 
name 'FTL shutdown', duration = 611.649 ms, result 0 00:29:28.118 true 00:29:28.118 14:48:36 -- ftl/trim.sh@63 -- # killprocess 78924 00:29:28.119 14:48:36 -- common/autotest_common.sh@936 -- # '[' -z 78924 ']' 00:29:28.119 14:48:36 -- common/autotest_common.sh@940 -- # kill -0 78924 00:29:28.119 14:48:36 -- common/autotest_common.sh@941 -- # uname 00:29:28.119 14:48:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:28.119 14:48:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78924 00:29:28.119 killing process with pid 78924 00:29:28.119 14:48:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:28.119 14:48:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:28.119 14:48:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78924' 00:29:28.119 14:48:36 -- common/autotest_common.sh@955 -- # kill 78924 00:29:28.119 14:48:36 -- common/autotest_common.sh@960 -- # wait 78924 00:29:34.682 14:48:42 -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:29:34.682 65536+0 records in 00:29:34.682 65536+0 records out 00:29:34.682 268435456 bytes (268 MB, 256 MiB) copied, 1.09619 s, 245 MB/s 00:29:34.682 14:48:43 -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:34.682 [2024-04-17 14:48:43.272718] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:29:34.682 [2024-04-17 14:48:43.272887] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79129 ] 00:29:34.941 [2024-04-17 14:48:43.457949] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.200 [2024-04-17 14:48:43.718766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:35.804 [2024-04-17 14:48:44.182271] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:35.804 [2024-04-17 14:48:44.182363] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:35.804 [2024-04-17 14:48:44.345738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.804 [2024-04-17 14:48:44.345809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:35.804 [2024-04-17 14:48:44.345830] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:35.804 [2024-04-17 14:48:44.345841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.804 [2024-04-17 14:48:44.349535] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.804 [2024-04-17 14:48:44.349585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:35.804 [2024-04-17 14:48:44.349600] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.653 ms 00:29:35.804 [2024-04-17 14:48:44.349616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.804 [2024-04-17 14:48:44.349770] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:35.804 [2024-04-17 14:48:44.351148] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:35.804 [2024-04-17 14:48:44.351187] mngt/ftl_mngt.c: 
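(A quick check of the dd figures above; a sketch in plain Python, ours rather than the test's. Note dd reports decimal MB/s:)

    # 65536 writes of bs=4K -> 256 MiB total, as dd reports.
    nbytes = 65536 * 4096
    assert nbytes == 268435456                 # "268435456 bytes (268 MB, 256 MiB)"
    print(round(nbytes / 1.09619 / 1e6))       # -> 245, matching "245 MB/s" (decimal MB)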
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.804 [2024-04-17 14:48:44.351205] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:35.804 [2024-04-17 14:48:44.351219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.436 ms 00:29:35.804 [2024-04-17 14:48:44.351231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.804 [2024-04-17 14:48:44.352957] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:35.804 [2024-04-17 14:48:44.376690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.804 [2024-04-17 14:48:44.376749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:35.804 [2024-04-17 14:48:44.376768] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.732 ms 00:29:35.804 [2024-04-17 14:48:44.376780] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.804 [2024-04-17 14:48:44.376930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.804 [2024-04-17 14:48:44.376946] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:35.804 [2024-04-17 14:48:44.376963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:29:35.804 [2024-04-17 14:48:44.376975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.804 [2024-04-17 14:48:44.384724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.804 [2024-04-17 14:48:44.384767] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:35.804 [2024-04-17 14:48:44.384782] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.692 ms 00:29:35.804 [2024-04-17 14:48:44.384794] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.804 [2024-04-17 14:48:44.384956] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.804 [2024-04-17 14:48:44.384989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:35.804 [2024-04-17 14:48:44.385001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:29:35.804 [2024-04-17 14:48:44.385012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.804 [2024-04-17 14:48:44.385045] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.804 [2024-04-17 14:48:44.385057] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:35.804 [2024-04-17 14:48:44.385068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:35.804 [2024-04-17 14:48:44.385079] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.804 [2024-04-17 14:48:44.385107] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:29:35.804 [2024-04-17 14:48:44.391684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.804 [2024-04-17 14:48:44.391733] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:35.804 [2024-04-17 14:48:44.391765] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.584 ms 00:29:35.804 [2024-04-17 14:48:44.391777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.804 [2024-04-17 14:48:44.391876] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.804 [2024-04-17 14:48:44.391890] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:35.804 [2024-04-17 14:48:44.391903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:35.804 [2024-04-17 14:48:44.391914] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.804 [2024-04-17 14:48:44.391943] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:35.804 [2024-04-17 14:48:44.391969] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:29:35.804 [2024-04-17 14:48:44.392009] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:35.804 [2024-04-17 14:48:44.392033] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:29:35.804 [2024-04-17 14:48:44.392112] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:29:35.804 [2024-04-17 14:48:44.392127] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:35.804 [2024-04-17 14:48:44.392142] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:29:35.804 [2024-04-17 14:48:44.392157] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:35.804 [2024-04-17 14:48:44.392172] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:35.804 [2024-04-17 14:48:44.392185] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:29:35.804 [2024-04-17 14:48:44.392196] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:35.804 [2024-04-17 14:48:44.392208] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:29:35.804 [2024-04-17 14:48:44.392220] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:29:35.804 [2024-04-17 14:48:44.392231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.804 [2024-04-17 14:48:44.392263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:35.804 [2024-04-17 14:48:44.392279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:29:35.804 [2024-04-17 14:48:44.392291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.804 [2024-04-17 14:48:44.392365] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.804 [2024-04-17 14:48:44.392379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:35.804 [2024-04-17 14:48:44.392391] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:29:35.804 [2024-04-17 14:48:44.392403] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.804 [2024-04-17 14:48:44.392487] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:35.804 [2024-04-17 14:48:44.392501] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:35.804 [2024-04-17 14:48:44.392517] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:35.804 [2024-04-17 14:48:44.392543] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:35.804 [2024-04-17 14:48:44.392555] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 
00:29:35.804 [2024-04-17 14:48:44.392567] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:35.804 [2024-04-17 14:48:44.392579] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:29:35.804 [2024-04-17 14:48:44.392590] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:35.804 [2024-04-17 14:48:44.392602] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:29:35.804 [2024-04-17 14:48:44.392614] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:35.804 [2024-04-17 14:48:44.392637] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:35.804 [2024-04-17 14:48:44.392649] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:29:35.804 [2024-04-17 14:48:44.392662] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:35.804 [2024-04-17 14:48:44.392673] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:35.804 [2024-04-17 14:48:44.392684] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:29:35.804 [2024-04-17 14:48:44.392695] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:35.804 [2024-04-17 14:48:44.392707] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:35.804 [2024-04-17 14:48:44.392718] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:29:35.804 [2024-04-17 14:48:44.392730] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:35.804 [2024-04-17 14:48:44.392741] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:29:35.804 [2024-04-17 14:48:44.392752] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:29:35.804 [2024-04-17 14:48:44.392764] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:29:35.804 [2024-04-17 14:48:44.392775] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:35.804 [2024-04-17 14:48:44.392786] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:29:35.804 [2024-04-17 14:48:44.392797] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:29:35.804 [2024-04-17 14:48:44.392808] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:35.804 [2024-04-17 14:48:44.392820] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:29:35.804 [2024-04-17 14:48:44.392831] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:29:35.804 [2024-04-17 14:48:44.392842] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:35.804 [2024-04-17 14:48:44.392853] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:29:35.804 [2024-04-17 14:48:44.392864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:29:35.804 [2024-04-17 14:48:44.392875] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:35.804 [2024-04-17 14:48:44.392886] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:29:35.804 [2024-04-17 14:48:44.392897] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:29:35.804 [2024-04-17 14:48:44.392908] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:35.804 [2024-04-17 14:48:44.392920] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:29:35.804 [2024-04-17 14:48:44.392930] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.25 MiB 00:29:35.804 [2024-04-17 14:48:44.392941] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:35.805 [2024-04-17 14:48:44.392953] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:29:35.805 [2024-04-17 14:48:44.392963] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:35.805 [2024-04-17 14:48:44.392974] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:35.805 [2024-04-17 14:48:44.392986] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:35.805 [2024-04-17 14:48:44.393009] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:35.805 [2024-04-17 14:48:44.393020] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:35.805 [2024-04-17 14:48:44.393032] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:35.805 [2024-04-17 14:48:44.393043] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:35.805 [2024-04-17 14:48:44.393054] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:35.805 [2024-04-17 14:48:44.393066] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:35.805 [2024-04-17 14:48:44.393076] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:35.805 [2024-04-17 14:48:44.393105] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:35.805 [2024-04-17 14:48:44.393117] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:35.805 [2024-04-17 14:48:44.393131] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:35.805 [2024-04-17 14:48:44.393145] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:29:35.805 [2024-04-17 14:48:44.393157] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:29:35.805 [2024-04-17 14:48:44.393170] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:29:35.805 [2024-04-17 14:48:44.393182] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:29:35.805 [2024-04-17 14:48:44.393194] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:29:35.805 [2024-04-17 14:48:44.393206] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:29:35.805 [2024-04-17 14:48:44.393219] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:29:35.805 [2024-04-17 14:48:44.393231] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:29:35.805 [2024-04-17 14:48:44.393243] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:29:35.805 [2024-04-17 14:48:44.393256] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 
blk_sz:0x20 00:29:35.805 [2024-04-17 14:48:44.393269] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:29:35.805 [2024-04-17 14:48:44.393282] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:29:35.805 [2024-04-17 14:48:44.393295] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:29:35.805 [2024-04-17 14:48:44.393307] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:35.805 [2024-04-17 14:48:44.393320] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:35.805 [2024-04-17 14:48:44.393333] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:35.805 [2024-04-17 14:48:44.393346] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:35.805 [2024-04-17 14:48:44.393358] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:35.805 [2024-04-17 14:48:44.393374] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:35.805 [2024-04-17 14:48:44.393387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.805 [2024-04-17 14:48:44.393405] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:35.805 [2024-04-17 14:48:44.393417] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.943 ms 00:29:35.805 [2024-04-17 14:48:44.393429] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.063 [2024-04-17 14:48:44.422017] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.063 [2024-04-17 14:48:44.422080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:36.063 [2024-04-17 14:48:44.422099] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.531 ms 00:29:36.063 [2024-04-17 14:48:44.422111] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.063 [2024-04-17 14:48:44.422284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.063 [2024-04-17 14:48:44.422298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:36.063 [2024-04-17 14:48:44.422312] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:29:36.063 [2024-04-17 14:48:44.422323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.063 [2024-04-17 14:48:44.501211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.063 [2024-04-17 14:48:44.501266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:36.063 [2024-04-17 14:48:44.501284] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.849 ms 00:29:36.063 [2024-04-17 14:48:44.501296] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.063 [2024-04-17 14:48:44.501406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.063 [2024-04-17 14:48:44.501420] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:36.063 [2024-04-17 14:48:44.501432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:36.063 [2024-04-17 14:48:44.501443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.063 [2024-04-17 14:48:44.501947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.063 [2024-04-17 14:48:44.501974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:36.063 [2024-04-17 14:48:44.501987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.476 ms 00:29:36.063 [2024-04-17 14:48:44.501999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.063 [2024-04-17 14:48:44.502132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.063 [2024-04-17 14:48:44.502147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:36.063 [2024-04-17 14:48:44.502160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:29:36.063 [2024-04-17 14:48:44.502171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.063 [2024-04-17 14:48:44.529195] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.063 [2024-04-17 14:48:44.529254] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:36.063 [2024-04-17 14:48:44.529273] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.994 ms 00:29:36.063 [2024-04-17 14:48:44.529286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.063 [2024-04-17 14:48:44.553595] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:36.063 [2024-04-17 14:48:44.553686] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:36.063 [2024-04-17 14:48:44.553712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.063 [2024-04-17 14:48:44.553724] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:36.063 [2024-04-17 14:48:44.553739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.250 ms 00:29:36.063 [2024-04-17 14:48:44.553750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.063 [2024-04-17 14:48:44.590751] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.063 [2024-04-17 14:48:44.590825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:36.063 [2024-04-17 14:48:44.590846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.825 ms 00:29:36.064 [2024-04-17 14:48:44.590870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.064 [2024-04-17 14:48:44.614606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.064 [2024-04-17 14:48:44.614680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:36.064 [2024-04-17 14:48:44.614699] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.572 ms 00:29:36.064 [2024-04-17 14:48:44.614711] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.064 [2024-04-17 14:48:44.639065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.064 [2024-04-17 14:48:44.639138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 
00:29:36.064 [2024-04-17 14:48:44.639157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.181 ms 00:29:36.064 [2024-04-17 14:48:44.639169] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.064 [2024-04-17 14:48:44.639831] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.064 [2024-04-17 14:48:44.639861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:36.064 [2024-04-17 14:48:44.639876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.477 ms 00:29:36.064 [2024-04-17 14:48:44.639888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.322 [2024-04-17 14:48:44.748220] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.322 [2024-04-17 14:48:44.748297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:36.322 [2024-04-17 14:48:44.748318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 108.298 ms 00:29:36.322 [2024-04-17 14:48:44.748330] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.322 [2024-04-17 14:48:44.765211] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:29:36.322 [2024-04-17 14:48:44.783561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.323 [2024-04-17 14:48:44.783631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:36.323 [2024-04-17 14:48:44.783650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.067 ms 00:29:36.323 [2024-04-17 14:48:44.783662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.323 [2024-04-17 14:48:44.783784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.323 [2024-04-17 14:48:44.783799] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:36.323 [2024-04-17 14:48:44.783812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:36.323 [2024-04-17 14:48:44.783824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.323 [2024-04-17 14:48:44.783881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.323 [2024-04-17 14:48:44.783895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:36.323 [2024-04-17 14:48:44.783912] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:29:36.323 [2024-04-17 14:48:44.783923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.323 [2024-04-17 14:48:44.786331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.323 [2024-04-17 14:48:44.786397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:29:36.323 [2024-04-17 14:48:44.786411] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.384 ms 00:29:36.323 [2024-04-17 14:48:44.786425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.323 [2024-04-17 14:48:44.786496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.323 [2024-04-17 14:48:44.786532] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:36.323 [2024-04-17 14:48:44.786545] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:36.323 [2024-04-17 14:48:44.786557] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.323 [2024-04-17 
14:48:44.786606] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:36.323 [2024-04-17 14:48:44.786621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.323 [2024-04-17 14:48:44.786633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:36.323 [2024-04-17 14:48:44.786646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:29:36.323 [2024-04-17 14:48:44.786657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.323 [2024-04-17 14:48:44.833000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.323 [2024-04-17 14:48:44.833077] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:36.323 [2024-04-17 14:48:44.833109] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.313 ms 00:29:36.323 [2024-04-17 14:48:44.833121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.323 [2024-04-17 14:48:44.833300] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.323 [2024-04-17 14:48:44.833314] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:36.323 [2024-04-17 14:48:44.833327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:29:36.323 [2024-04-17 14:48:44.833337] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.323 [2024-04-17 14:48:44.834425] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:36.323 [2024-04-17 14:48:44.841110] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 488.330 ms, result 0 00:29:36.323 [2024-04-17 14:48:44.842100] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:36.323 [2024-04-17 14:48:44.866007] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:44.686 Copying: 256/256 [MB] (average 31 MBps) [2024-04-17 14:48:53.089466] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:44.686 [2024-04-17 14:48:53.107322] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.686 [2024-04-17 14:48:53.107377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:44.686 [2024-04-17 14:48:53.107394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:44.686 [2024-04-17 14:48:53.107407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.686 [2024-04-17 14:48:53.107436] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:29:44.686 [2024-04-17 14:48:53.111800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.686 [2024-04-17 14:48:53.111838] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:44.686 [2024-04-17 14:48:53.111862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.343 ms 00:29:44.686 [2024-04-17 14:48:53.111875] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.686 [2024-04-17 14:48:53.113472] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.686 [2024-04-17 14:48:53.113528] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:44.686 [2024-04-17 14:48:53.113543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.543 ms 00:29:44.686 [2024-04-17 14:48:53.113555] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.686 [2024-04-17 14:48:53.120179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.686 [2024-04-17 14:48:53.120229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:44.686 [2024-04-17 14:48:53.120244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.600 ms 00:29:44.686 [2024-04-17 14:48:53.120256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.686 [2024-04-17 14:48:53.127349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.686 [2024-04-17 14:48:53.127393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:29:44.686 [2024-04-17 14:48:53.127408] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.008 ms 00:29:44.686 [2024-04-17 14:48:53.127421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.686 [2024-04-17 14:48:53.170580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.686 [2024-04-17 14:48:53.170641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:44.686 [2024-04-17 14:48:53.170658] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.093 ms 00:29:44.686 [2024-04-17 14:48:53.170670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.686 [2024-04-17 14:48:53.195692] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.686 [2024-04-17 14:48:53.195771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:44.686 [2024-04-17 14:48:53.195788] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.919 ms 00:29:44.686 [2024-04-17 14:48:53.195800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.686 [2024-04-17 14:48:53.196023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.686 [2024-04-17 14:48:53.196050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:44.686 [2024-04-17 14:48:53.196075] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:29:44.686 [2024-04-17 14:48:53.196086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.686 [2024-04-17 14:48:53.237192] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.686 [2024-04-17 14:48:53.237245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:29:44.686 [2024-04-17 14:48:53.237260] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.084 ms 00:29:44.686 [2024-04-17 14:48:53.237286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.686 [2024-04-17 14:48:53.278652] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.686 [2024-04-17 14:48:53.278735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:29:44.686 [2024-04-17 14:48:53.278753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.277 ms 
00:29:44.686 [2024-04-17 14:48:53.278766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.954 [2024-04-17 14:48:53.320214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.954 [2024-04-17 14:48:53.320274] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:44.954 [2024-04-17 14:48:53.320290] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.319 ms 00:29:44.954 [2024-04-17 14:48:53.320316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.954 [2024-04-17 14:48:53.361801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.954 [2024-04-17 14:48:53.361884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:44.954 [2024-04-17 14:48:53.361902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.340 ms 00:29:44.954 [2024-04-17 14:48:53.361913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.954 [2024-04-17 14:48:53.362024] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:44.954 [2024-04-17 14:48:53.362046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 
00:29:44.955 [2024-04-17 14:48:53.362282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 
wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.362991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.363003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.363016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.363029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.363041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.363054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.363066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.363083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.363103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:44.955 [2024-04-17 14:48:53.363126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:44.956 [2024-04-17 14:48:53.363148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:44.956 [2024-04-17 14:48:53.363162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:44.956 [2024-04-17 14:48:53.363174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:44.956 [2024-04-17 14:48:53.363187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:44.956 [2024-04-17 14:48:53.363200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:44.956 [2024-04-17 14:48:53.363212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:44.956 [2024-04-17 14:48:53.363225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:44.956 [2024-04-17 14:48:53.363237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:44.956 [2024-04-17 14:48:53.363250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:44.956 [2024-04-17 14:48:53.363263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:44.956 [2024-04-17 14:48:53.363276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:44.956 [2024-04-17 14:48:53.363289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:44.956 [2024-04-17 14:48:53.363301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:44.956 [2024-04-17 14:48:53.363314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:44.956 [2024-04-17 14:48:53.363326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:44.956 [2024-04-17 14:48:53.363338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:44.956 [2024-04-17 14:48:53.363351] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:44.956 [2024-04-17 14:48:53.363363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:44.956 [2024-04-17 14:48:53.363394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:44.956 [2024-04-17 14:48:53.363407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:44.956 [2024-04-17 14:48:53.363420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:44.956 [2024-04-17 14:48:53.363437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:44.956 [2024-04-17 14:48:53.363458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:44.956 [2024-04-17 14:48:53.363480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:44.956 [2024-04-17 14:48:53.363529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:44.956 [2024-04-17 14:48:53.363550] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:44.956 [2024-04-17 14:48:53.363568] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5b2ea0b9-3b71-4165-8192-0f28abbb9f7b 00:29:44.956 [2024-04-17 14:48:53.363580] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:44.956 [2024-04-17 14:48:53.363591] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:44.956 [2024-04-17 14:48:53.363601] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:44.956 [2024-04-17 14:48:53.363613] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:44.956 [2024-04-17 14:48:53.363624] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:44.956 [2024-04-17 14:48:53.363635] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:44.956 [2024-04-17 14:48:53.363647] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:44.956 [2024-04-17 14:48:53.363657] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:44.956 [2024-04-17 14:48:53.363667] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:44.956 [2024-04-17 14:48:53.363679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.956 [2024-04-17 14:48:53.363691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:44.956 [2024-04-17 14:48:53.363704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.657 ms 00:29:44.956 [2024-04-17 14:48:53.363715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.956 [2024-04-17 14:48:53.385891] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.956 [2024-04-17 14:48:53.385941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:44.956 [2024-04-17 14:48:53.385974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.139 ms 00:29:44.956 [2024-04-17 14:48:53.385986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.956 [2024-04-17 14:48:53.386304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.956 [2024-04-17 14:48:53.386329] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:44.956 [2024-04-17 14:48:53.386354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:29:44.956 [2024-04-17 14:48:53.386372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.956 [2024-04-17 14:48:53.450483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:44.956 [2024-04-17 14:48:53.450552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:44.956 [2024-04-17 14:48:53.450570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:44.956 [2024-04-17 14:48:53.450582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.956 [2024-04-17 14:48:53.450710] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:44.956 [2024-04-17 14:48:53.450724] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:44.956 [2024-04-17 14:48:53.450736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:44.956 [2024-04-17 14:48:53.450755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.956 [2024-04-17 14:48:53.450821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:44.956 [2024-04-17 14:48:53.450842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:44.956 [2024-04-17 14:48:53.450860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:44.956 [2024-04-17 14:48:53.450879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.956 [2024-04-17 14:48:53.450903] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:44.956 [2024-04-17 14:48:53.450916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:44.956 [2024-04-17 14:48:53.450928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:44.956 [2024-04-17 14:48:53.450940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.227 [2024-04-17 14:48:53.583526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:45.227 [2024-04-17 14:48:53.583595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:45.227 [2024-04-17 14:48:53.583613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:45.228 [2024-04-17 14:48:53.583626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.228 [2024-04-17 14:48:53.638331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:45.228 [2024-04-17 14:48:53.638422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:45.228 [2024-04-17 14:48:53.638439] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:45.228 [2024-04-17 14:48:53.638460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.228 [2024-04-17 14:48:53.638574] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:45.228 [2024-04-17 14:48:53.638590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:45.228 [2024-04-17 14:48:53.638603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:45.228 [2024-04-17 14:48:53.638615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.228 [2024-04-17 14:48:53.638647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:29:45.228 [2024-04-17 14:48:53.638660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:45.228 [2024-04-17 14:48:53.638672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:45.228 [2024-04-17 14:48:53.638683] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.228 [2024-04-17 14:48:53.638825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:45.228 [2024-04-17 14:48:53.638850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:45.228 [2024-04-17 14:48:53.638863] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:45.228 [2024-04-17 14:48:53.638875] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.228 [2024-04-17 14:48:53.638922] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:45.228 [2024-04-17 14:48:53.638940] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:45.228 [2024-04-17 14:48:53.638959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:45.228 [2024-04-17 14:48:53.638981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.228 [2024-04-17 14:48:53.639028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:45.228 [2024-04-17 14:48:53.639041] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:45.228 [2024-04-17 14:48:53.639053] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:45.228 [2024-04-17 14:48:53.639065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.228 [2024-04-17 14:48:53.639114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:45.228 [2024-04-17 14:48:53.639127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:45.228 [2024-04-17 14:48:53.639138] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:45.228 [2024-04-17 14:48:53.639150] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:45.228 [2024-04-17 14:48:53.639327] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 532.015 ms, result 0 00:29:46.668 00:29:46.668 00:29:46.668 14:48:55 -- ftl/trim.sh@72 -- # svcpid=79248 00:29:46.668 14:48:55 -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:29:46.668 14:48:55 -- ftl/trim.sh@73 -- # waitforlisten 79248 00:29:46.668 14:48:55 -- common/autotest_common.sh@817 -- # '[' -z 79248 ']' 00:29:46.668 14:48:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:46.668 14:48:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:46.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:46.668 14:48:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:46.668 14:48:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:46.668 14:48:55 -- common/autotest_common.sh@10 -- # set +x 00:29:46.668 [2024-04-17 14:48:55.211318] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:29:46.668 [2024-04-17 14:48:55.211828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79248 ] 00:29:46.966 [2024-04-17 14:48:55.381192] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:47.239 [2024-04-17 14:48:55.646131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:48.177 14:48:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:48.177 14:48:56 -- common/autotest_common.sh@850 -- # return 0 00:29:48.177 14:48:56 -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:29:48.435 [2024-04-17 14:48:56.948826] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:48.435 [2024-04-17 14:48:56.948899] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:48.695 [2024-04-17 14:48:57.126951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.695 [2024-04-17 14:48:57.127016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:48.695 [2024-04-17 14:48:57.127040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:48.695 [2024-04-17 14:48:57.127052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.695 [2024-04-17 14:48:57.130722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.695 [2024-04-17 14:48:57.130766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:48.695 [2024-04-17 14:48:57.130784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.644 ms 00:29:48.695 [2024-04-17 14:48:57.130799] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.695 [2024-04-17 14:48:57.130923] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:48.695 [2024-04-17 14:48:57.132301] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:48.695 [2024-04-17 14:48:57.132346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.695 [2024-04-17 14:48:57.132362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:48.695 [2024-04-17 14:48:57.132381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.439 ms 00:29:48.695 [2024-04-17 14:48:57.132393] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.695 [2024-04-17 14:48:57.134106] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:48.695 [2024-04-17 14:48:57.156217] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.695 [2024-04-17 14:48:57.156270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:48.695 [2024-04-17 14:48:57.156287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.115 ms 00:29:48.695 [2024-04-17 14:48:57.156302] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.695 [2024-04-17 14:48:57.156413] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.695 [2024-04-17 14:48:57.156434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:48.695 [2024-04-17 14:48:57.156446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.027 ms 00:29:48.695 [2024-04-17 14:48:57.156460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.695 [2024-04-17 14:48:57.163701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.695 [2024-04-17 14:48:57.163753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:48.695 [2024-04-17 14:48:57.163783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.175 ms 00:29:48.695 [2024-04-17 14:48:57.163797] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.695 [2024-04-17 14:48:57.163909] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.695 [2024-04-17 14:48:57.163931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:48.696 [2024-04-17 14:48:57.163943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:29:48.696 [2024-04-17 14:48:57.163956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.696 [2024-04-17 14:48:57.163987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.696 [2024-04-17 14:48:57.164002] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:48.696 [2024-04-17 14:48:57.164013] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:48.696 [2024-04-17 14:48:57.164026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.696 [2024-04-17 14:48:57.164055] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:29:48.696 [2024-04-17 14:48:57.170582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.696 [2024-04-17 14:48:57.170620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:48.696 [2024-04-17 14:48:57.170638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.531 ms 00:29:48.696 [2024-04-17 14:48:57.170650] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.696 [2024-04-17 14:48:57.170743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.696 [2024-04-17 14:48:57.170758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:48.696 [2024-04-17 14:48:57.170773] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:48.696 [2024-04-17 14:48:57.170784] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.696 [2024-04-17 14:48:57.170820] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:48.696 [2024-04-17 14:48:57.170846] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:29:48.696 [2024-04-17 14:48:57.170900] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:48.696 [2024-04-17 14:48:57.170936] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:29:48.696 [2024-04-17 14:48:57.171019] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:29:48.696 [2024-04-17 14:48:57.171035] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:48.696 [2024-04-17 14:48:57.171053] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout 
blob store 0x140 bytes 00:29:48.696 [2024-04-17 14:48:57.171068] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:48.696 [2024-04-17 14:48:57.171085] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:48.696 [2024-04-17 14:48:57.171098] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:29:48.696 [2024-04-17 14:48:57.171112] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:48.696 [2024-04-17 14:48:57.171123] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:29:48.696 [2024-04-17 14:48:57.171137] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:29:48.696 [2024-04-17 14:48:57.171148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.696 [2024-04-17 14:48:57.171165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:48.696 [2024-04-17 14:48:57.171179] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:29:48.696 [2024-04-17 14:48:57.171195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.696 [2024-04-17 14:48:57.171265] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.696 [2024-04-17 14:48:57.171280] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:48.696 [2024-04-17 14:48:57.171292] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:29:48.696 [2024-04-17 14:48:57.171306] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.696 [2024-04-17 14:48:57.171394] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:48.696 [2024-04-17 14:48:57.171421] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:48.696 [2024-04-17 14:48:57.171440] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:48.696 [2024-04-17 14:48:57.171457] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:48.696 [2024-04-17 14:48:57.171480] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:48.696 [2024-04-17 14:48:57.171493] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:48.696 [2024-04-17 14:48:57.171503] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:29:48.696 [2024-04-17 14:48:57.171531] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:48.696 [2024-04-17 14:48:57.171542] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:29:48.696 [2024-04-17 14:48:57.171555] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:48.696 [2024-04-17 14:48:57.171565] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:48.696 [2024-04-17 14:48:57.171581] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:29:48.696 [2024-04-17 14:48:57.171592] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:48.696 [2024-04-17 14:48:57.171605] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:48.696 [2024-04-17 14:48:57.171615] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:29:48.696 [2024-04-17 14:48:57.171628] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:48.696 [2024-04-17 14:48:57.171638] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region nvc_md_mirror 00:29:48.696 [2024-04-17 14:48:57.171651] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:29:48.696 [2024-04-17 14:48:57.171661] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:48.696 [2024-04-17 14:48:57.171685] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:29:48.696 [2024-04-17 14:48:57.171695] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:29:48.696 [2024-04-17 14:48:57.171708] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:29:48.696 [2024-04-17 14:48:57.171718] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:48.696 [2024-04-17 14:48:57.171731] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:29:48.696 [2024-04-17 14:48:57.171741] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:29:48.696 [2024-04-17 14:48:57.171753] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:48.696 [2024-04-17 14:48:57.171764] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:29:48.696 [2024-04-17 14:48:57.171781] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:29:48.696 [2024-04-17 14:48:57.171791] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:48.696 [2024-04-17 14:48:57.171803] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:29:48.696 [2024-04-17 14:48:57.171813] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:29:48.696 [2024-04-17 14:48:57.171826] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:48.696 [2024-04-17 14:48:57.171840] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:29:48.696 [2024-04-17 14:48:57.171860] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:29:48.696 [2024-04-17 14:48:57.171872] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:48.696 [2024-04-17 14:48:57.171884] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:29:48.696 [2024-04-17 14:48:57.171894] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:48.696 [2024-04-17 14:48:57.171907] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:48.696 [2024-04-17 14:48:57.171917] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:29:48.696 [2024-04-17 14:48:57.171929] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:48.696 [2024-04-17 14:48:57.171939] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:48.696 [2024-04-17 14:48:57.171952] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:48.696 [2024-04-17 14:48:57.171963] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:48.696 [2024-04-17 14:48:57.171978] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:48.696 [2024-04-17 14:48:57.171989] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:48.696 [2024-04-17 14:48:57.172003] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:48.696 [2024-04-17 14:48:57.172014] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:48.696 [2024-04-17 14:48:57.172026] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:48.696 [2024-04-17 14:48:57.172036] 
ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:48.696 [2024-04-17 14:48:57.172049] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:48.696 [2024-04-17 14:48:57.172061] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:48.697 [2024-04-17 14:48:57.172077] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:48.697 [2024-04-17 14:48:57.172090] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:29:48.697 [2024-04-17 14:48:57.172104] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:29:48.697 [2024-04-17 14:48:57.172115] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:29:48.697 [2024-04-17 14:48:57.172130] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:29:48.697 [2024-04-17 14:48:57.172142] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:29:48.697 [2024-04-17 14:48:57.172156] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:29:48.697 [2024-04-17 14:48:57.172168] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:29:48.697 [2024-04-17 14:48:57.172185] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:29:48.697 [2024-04-17 14:48:57.172196] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:29:48.697 [2024-04-17 14:48:57.172209] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:29:48.697 [2024-04-17 14:48:57.172221] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:29:48.697 [2024-04-17 14:48:57.172234] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:29:48.697 [2024-04-17 14:48:57.172246] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:29:48.697 [2024-04-17 14:48:57.172263] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:48.697 [2024-04-17 14:48:57.172280] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:48.697 [2024-04-17 14:48:57.172309] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:48.697 [2024-04-17 14:48:57.172327] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:48.697 [2024-04-17 14:48:57.172351] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:48.697 [2024-04-17 14:48:57.172365] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:48.697 [2024-04-17 14:48:57.172380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.697 [2024-04-17 14:48:57.172394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:48.697 [2024-04-17 14:48:57.172411] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.031 ms 00:29:48.697 [2024-04-17 14:48:57.172422] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.697 [2024-04-17 14:48:57.201720] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.697 [2024-04-17 14:48:57.201766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:48.697 [2024-04-17 14:48:57.201794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.234 ms 00:29:48.697 [2024-04-17 14:48:57.201807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.697 [2024-04-17 14:48:57.201969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.697 [2024-04-17 14:48:57.201983] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:48.697 [2024-04-17 14:48:57.201998] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:29:48.697 [2024-04-17 14:48:57.202010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.697 [2024-04-17 14:48:57.262185] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.697 [2024-04-17 14:48:57.262236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:48.697 [2024-04-17 14:48:57.262275] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.144 ms 00:29:48.697 [2024-04-17 14:48:57.262286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.697 [2024-04-17 14:48:57.262409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.697 [2024-04-17 14:48:57.262423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:48.697 [2024-04-17 14:48:57.262448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:48.697 [2024-04-17 14:48:57.262459] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.697 [2024-04-17 14:48:57.262956] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.697 [2024-04-17 14:48:57.262974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:48.697 [2024-04-17 14:48:57.262988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.464 ms 00:29:48.697 [2024-04-17 14:48:57.263003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.697 [2024-04-17 14:48:57.263128] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.697 [2024-04-17 14:48:57.263142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:48.697 [2024-04-17 14:48:57.263156] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:29:48.697 [2024-04-17 14:48:57.263173] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.697 [2024-04-17 14:48:57.289532] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.697 [2024-04-17 
14:48:57.289586] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:48.697 [2024-04-17 14:48:57.289604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.318 ms 00:29:48.697 [2024-04-17 14:48:57.289615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.956 [2024-04-17 14:48:57.311746] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:29:48.956 [2024-04-17 14:48:57.311804] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:48.956 [2024-04-17 14:48:57.311825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.956 [2024-04-17 14:48:57.311837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:48.956 [2024-04-17 14:48:57.311857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.024 ms 00:29:48.956 [2024-04-17 14:48:57.311868] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.956 [2024-04-17 14:48:57.347201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.956 [2024-04-17 14:48:57.347301] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:48.956 [2024-04-17 14:48:57.347323] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.211 ms 00:29:48.956 [2024-04-17 14:48:57.347336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.956 [2024-04-17 14:48:57.370041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.957 [2024-04-17 14:48:57.370103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:48.957 [2024-04-17 14:48:57.370122] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.555 ms 00:29:48.957 [2024-04-17 14:48:57.370134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.957 [2024-04-17 14:48:57.391875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.957 [2024-04-17 14:48:57.391954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:48.957 [2024-04-17 14:48:57.391974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.601 ms 00:29:48.957 [2024-04-17 14:48:57.391998] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.957 [2024-04-17 14:48:57.392606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.957 [2024-04-17 14:48:57.392635] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:48.957 [2024-04-17 14:48:57.392654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.464 ms 00:29:48.957 [2024-04-17 14:48:57.392665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.957 [2024-04-17 14:48:57.494784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.957 [2024-04-17 14:48:57.494854] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:48.957 [2024-04-17 14:48:57.494876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.081 ms 00:29:48.957 [2024-04-17 14:48:57.494888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.957 [2024-04-17 14:48:57.509589] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:29:48.957 [2024-04-17 14:48:57.527036] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.957 [2024-04-17 14:48:57.527102] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:48.957 [2024-04-17 14:48:57.527118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.010 ms 00:29:48.957 [2024-04-17 14:48:57.527136] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.957 [2024-04-17 14:48:57.527251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.957 [2024-04-17 14:48:57.527268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:48.957 [2024-04-17 14:48:57.527280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:48.957 [2024-04-17 14:48:57.527298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.957 [2024-04-17 14:48:57.527352] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.957 [2024-04-17 14:48:57.527370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:48.957 [2024-04-17 14:48:57.527381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:29:48.957 [2024-04-17 14:48:57.527395] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.957 [2024-04-17 14:48:57.529799] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.957 [2024-04-17 14:48:57.529840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:29:48.957 [2024-04-17 14:48:57.529853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.377 ms 00:29:48.957 [2024-04-17 14:48:57.529867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.957 [2024-04-17 14:48:57.529903] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.957 [2024-04-17 14:48:57.529917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:48.957 [2024-04-17 14:48:57.529928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:48.957 [2024-04-17 14:48:57.529942] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:48.957 [2024-04-17 14:48:57.529980] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:48.957 [2024-04-17 14:48:57.529998] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:48.957 [2024-04-17 14:48:57.530012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:48.957 [2024-04-17 14:48:57.530026] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:29:48.957 [2024-04-17 14:48:57.530037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.216 [2024-04-17 14:48:57.575734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.216 [2024-04-17 14:48:57.575792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:49.216 [2024-04-17 14:48:57.575812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.661 ms 00:29:49.216 [2024-04-17 14:48:57.575824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.216 [2024-04-17 14:48:57.575983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.216 [2024-04-17 14:48:57.575997] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:49.216 [2024-04-17 14:48:57.576012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.036 ms 00:29:49.216 [2024-04-17 14:48:57.576024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.216 [2024-04-17 14:48:57.577098] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:49.216 [2024-04-17 14:48:57.583897] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 449.816 ms, result 0 00:29:49.216 [2024-04-17 14:48:57.585109] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:49.216 Some configs were skipped because the RPC state that can call them passed over. 00:29:49.216 14:48:57 -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:29:49.475 [2024-04-17 14:48:57.938985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.475 [2024-04-17 14:48:57.939070] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:29:49.475 [2024-04-17 14:48:57.939087] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.016 ms 00:29:49.475 [2024-04-17 14:48:57.939117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.475 [2024-04-17 14:48:57.939161] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 42.196 ms, result 0 00:29:49.475 true 00:29:49.475 14:48:57 -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:29:49.734 [2024-04-17 14:48:58.282629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:49.734 [2024-04-17 14:48:58.282694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:29:49.734 [2024-04-17 14:48:58.282716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.087 ms 00:29:49.734 [2024-04-17 14:48:58.282728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:49.734 [2024-04-17 14:48:58.282776] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 47.261 ms, result 0 00:29:49.734 true 00:29:49.734 14:48:58 -- ftl/trim.sh@81 -- # killprocess 79248 00:29:49.734 14:48:58 -- common/autotest_common.sh@936 -- # '[' -z 79248 ']' 00:29:49.734 14:48:58 -- common/autotest_common.sh@940 -- # kill -0 79248 00:29:49.734 14:48:58 -- common/autotest_common.sh@941 -- # uname 00:29:49.734 14:48:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:49.734 14:48:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79248 00:29:49.734 killing process with pid 79248 00:29:49.734 14:48:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:49.734 14:48:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:49.734 14:48:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79248' 00:29:49.734 14:48:58 -- common/autotest_common.sh@955 -- # kill 79248 00:29:49.734 14:48:58 -- common/autotest_common.sh@960 -- # wait 79248 00:29:51.113 [2024-04-17 14:48:59.577979] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.113 [2024-04-17 14:48:59.578057] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:51.113 [2024-04-17 14:48:59.578075] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:51.113 [2024-04-17 
14:48:59.578090] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.113 [2024-04-17 14:48:59.578115] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:29:51.113 [2024-04-17 14:48:59.582171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.113 [2024-04-17 14:48:59.582214] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:51.113 [2024-04-17 14:48:59.582232] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.031 ms 00:29:51.113 [2024-04-17 14:48:59.582248] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.113 [2024-04-17 14:48:59.582588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.113 [2024-04-17 14:48:59.582613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:51.113 [2024-04-17 14:48:59.582632] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:29:51.113 [2024-04-17 14:48:59.582657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.113 [2024-04-17 14:48:59.586453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.113 [2024-04-17 14:48:59.586502] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:51.113 [2024-04-17 14:48:59.586521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.768 ms 00:29:51.113 [2024-04-17 14:48:59.586533] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.113 [2024-04-17 14:48:59.593114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.113 [2024-04-17 14:48:59.593151] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:29:51.113 [2024-04-17 14:48:59.593171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.531 ms 00:29:51.113 [2024-04-17 14:48:59.593181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.113 [2024-04-17 14:48:59.611882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.113 [2024-04-17 14:48:59.611960] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:51.113 [2024-04-17 14:48:59.611981] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.626 ms 00:29:51.113 [2024-04-17 14:48:59.611993] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.113 [2024-04-17 14:48:59.624098] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.113 [2024-04-17 14:48:59.624174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:51.113 [2024-04-17 14:48:59.624199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.991 ms 00:29:51.113 [2024-04-17 14:48:59.624211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.113 [2024-04-17 14:48:59.624429] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.113 [2024-04-17 14:48:59.624449] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:51.113 [2024-04-17 14:48:59.624469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:29:51.113 [2024-04-17 14:48:59.624481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.113 [2024-04-17 14:48:59.643283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.113 [2024-04-17 14:48:59.643349] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:29:51.113 [2024-04-17 14:48:59.643370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.761 ms 00:29:51.113 [2024-04-17 14:48:59.643382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.113 [2024-04-17 14:48:59.662265] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.113 [2024-04-17 14:48:59.662336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:29:51.113 [2024-04-17 14:48:59.662363] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.804 ms 00:29:51.113 [2024-04-17 14:48:59.662375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.113 [2024-04-17 14:48:59.680349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.113 [2024-04-17 14:48:59.680406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:51.113 [2024-04-17 14:48:59.680446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.857 ms 00:29:51.113 [2024-04-17 14:48:59.680457] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.113 [2024-04-17 14:48:59.697776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.113 [2024-04-17 14:48:59.697832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:51.113 [2024-04-17 14:48:59.697851] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.214 ms 00:29:51.113 [2024-04-17 14:48:59.697863] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.113 [2024-04-17 14:48:59.697920] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:51.113 [2024-04-17 14:48:59.697940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:51.113 [2024-04-17 14:48:59.697957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:51.113 [2024-04-17 14:48:59.697970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:51.113 [2024-04-17 14:48:59.697984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:51.114 [2024-04-17 14:48:59.697996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:51.114 [2024-04-17 14:48:59.698010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:51.114 [2024-04-17 14:48:59.698022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:51.114 [2024-04-17 14:48:59.698044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:51.114 [2024-04-17 14:48:59.698063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:51.114 [2024-04-17 14:48:59.698088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:51.114 [2024-04-17 14:48:59.698110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:51.114 [2024-04-17 14:48:59.698127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:51.114 [2024-04-17 14:48:59.698139] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:51.114 [2024-04-17 14:48:59.698157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:51.114 [2024-04-17 14:48:59.698169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:51.114 [2024-04-17 14:48:59.698183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:51.114 [2024-04-17 14:48:59.698195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:51.114 [2024-04-17 14:48:59.698210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:51.114 [2024-04-17 14:48:59.698222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:51.114 [2024-04-17 14:48:59.698236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:51.114 [2024-04-17 14:48:59.698248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:51.114 [2024-04-17 14:48:59.698262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:51.114 [2024-04-17 14:48:59.698274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:51.114 [2024-04-17 14:48:59.698288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:51.114 [2024-04-17 14:48:59.698306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:51.114 [2024-04-17 14:48:59.698323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:51.114 [2024-04-17 14:48:59.698335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:51.114 [2024-04-17 14:48:59.698360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:51.114 [2024-04-17 14:48:59.698372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:51.137 [2024-04-17 14:48:59.698404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:51.137 [2024-04-17 14:48:59.698425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:51.137 [2024-04-17 14:48:59.698449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:51.137 [2024-04-17 14:48:59.698463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:51.137 [2024-04-17 14:48:59.698479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:51.137 [2024-04-17 14:48:59.698492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:51.137 [2024-04-17 14:48:59.698519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:51.137 [2024-04-17 14:48:59.698533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:51.137 [2024-04-17 14:48:59.698548] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:51.137 [2024-04-17 14:48:59.698561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:51.137 [2024-04-17 14:48:59.698576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:51.137 [2024-04-17 14:48:59.698589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:51.137 [2024-04-17 14:48:59.698610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:51.137 [2024-04-17 14:48:59.698622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:51.137 [2024-04-17 14:48:59.698638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:51.137 [2024-04-17 14:48:59.698650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:51.137 [2024-04-17 14:48:59.698665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:51.137 [2024-04-17 14:48:59.698678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:51.137 [2024-04-17 14:48:59.698693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:51.137 [2024-04-17 14:48:59.698706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:51.137 [2024-04-17 14:48:59.698721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:51.137 [2024-04-17 14:48:59.698734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:51.137 [2024-04-17 14:48:59.698748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:51.137 [2024-04-17 14:48:59.698761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:51.137 [2024-04-17 14:48:59.698776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:51.137 [2024-04-17 14:48:59.698789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:51.137 [2024-04-17 14:48:59.698804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:51.137 [2024-04-17 14:48:59.698816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:51.137 [2024-04-17 14:48:59.698844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:51.137 [2024-04-17 14:48:59.698871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:51.137 [2024-04-17 14:48:59.698896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:51.137 [2024-04-17 14:48:59.698911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:51.137 [2024-04-17 14:48:59.698927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:51.137 [2024-04-17 
14:48:59.698940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:51.137 [2024-04-17 14:48:59.698955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:51.138 [2024-04-17 14:48:59.698967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:51.138 [2024-04-17 14:48:59.698984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:51.138 [2024-04-17 14:48:59.698997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:51.138 [2024-04-17 14:48:59.699014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:51.138 [2024-04-17 14:48:59.699027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:51.138 [2024-04-17 14:48:59.699042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:51.138 [2024-04-17 14:48:59.699054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:51.138 [2024-04-17 14:48:59.699070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:51.138 [2024-04-17 14:48:59.699083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:51.138 [2024-04-17 14:48:59.699101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:51.138 [2024-04-17 14:48:59.699113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:51.138 [2024-04-17 14:48:59.699128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:51.138 [2024-04-17 14:48:59.699141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:51.138 [2024-04-17 14:48:59.699156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:51.138 [2024-04-17 14:48:59.699168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:51.138 [2024-04-17 14:48:59.699183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:51.138 [2024-04-17 14:48:59.699196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:51.138 [2024-04-17 14:48:59.699212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:51.138 [2024-04-17 14:48:59.699224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:51.138 [2024-04-17 14:48:59.699239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:51.138 [2024-04-17 14:48:59.699252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:51.138 [2024-04-17 14:48:59.699268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:51.138 [2024-04-17 14:48:59.699289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:29:51.138 [2024-04-17 14:48:59.699324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:51.138 [2024-04-17 14:48:59.699347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:51.138 [2024-04-17 14:48:59.699367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:51.138 [2024-04-17 14:48:59.699393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:51.138 [2024-04-17 14:48:59.699412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:51.138 [2024-04-17 14:48:59.699424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:51.138 [2024-04-17 14:48:59.699439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:51.138 [2024-04-17 14:48:59.699452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:51.138 [2024-04-17 14:48:59.699467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:51.138 [2024-04-17 14:48:59.699480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:51.138 [2024-04-17 14:48:59.699508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:51.138 [2024-04-17 14:48:59.699521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:51.138 [2024-04-17 14:48:59.699536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:51.138 [2024-04-17 14:48:59.699558] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:51.138 [2024-04-17 14:48:59.699573] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5b2ea0b9-3b71-4165-8192-0f28abbb9f7b 00:29:51.138 [2024-04-17 14:48:59.699586] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:51.138 [2024-04-17 14:48:59.699600] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:51.138 [2024-04-17 14:48:59.699611] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:51.138 [2024-04-17 14:48:59.699640] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:51.138 [2024-04-17 14:48:59.699650] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:51.138 [2024-04-17 14:48:59.699668] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:51.138 [2024-04-17 14:48:59.699679] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:51.138 [2024-04-17 14:48:59.699691] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:51.138 [2024-04-17 14:48:59.699702] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:51.138 [2024-04-17 14:48:59.699724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.138 [2024-04-17 14:48:59.699741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:51.138 [2024-04-17 14:48:59.699763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.805 ms 00:29:51.138 [2024-04-17 14:48:59.699781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:29:51.400 [2024-04-17 14:48:59.722688] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.400 [2024-04-17 14:48:59.722745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:51.400 [2024-04-17 14:48:59.722765] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.867 ms 00:29:51.400 [2024-04-17 14:48:59.722780] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.400 [2024-04-17 14:48:59.723141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.400 [2024-04-17 14:48:59.723159] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:51.400 [2024-04-17 14:48:59.723178] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:29:51.400 [2024-04-17 14:48:59.723190] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.400 [2024-04-17 14:48:59.803511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.400 [2024-04-17 14:48:59.803580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:51.400 [2024-04-17 14:48:59.803605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.400 [2024-04-17 14:48:59.803627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.400 [2024-04-17 14:48:59.803775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.400 [2024-04-17 14:48:59.803789] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:51.400 [2024-04-17 14:48:59.803804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.400 [2024-04-17 14:48:59.803815] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.400 [2024-04-17 14:48:59.803879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.400 [2024-04-17 14:48:59.803893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:51.400 [2024-04-17 14:48:59.803908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.400 [2024-04-17 14:48:59.803923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.400 [2024-04-17 14:48:59.803948] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.400 [2024-04-17 14:48:59.803960] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:51.400 [2024-04-17 14:48:59.803978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.400 [2024-04-17 14:48:59.803995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.400 [2024-04-17 14:48:59.946472] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.400 [2024-04-17 14:48:59.946540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:51.400 [2024-04-17 14:48:59.946561] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.400 [2024-04-17 14:48:59.946577] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.400 [2024-04-17 14:48:59.999888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.400 [2024-04-17 14:48:59.999947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:51.400 [2024-04-17 14:48:59.999966] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.400 
[2024-04-17 14:48:59.999977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.400 [2024-04-17 14:49:00.000070] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.400 [2024-04-17 14:49:00.000084] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:51.400 [2024-04-17 14:49:00.000098] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.400 [2024-04-17 14:49:00.000109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.400 [2024-04-17 14:49:00.000177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.400 [2024-04-17 14:49:00.000198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:51.400 [2024-04-17 14:49:00.000223] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.400 [2024-04-17 14:49:00.000235] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.400 [2024-04-17 14:49:00.000367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.400 [2024-04-17 14:49:00.000382] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:51.400 [2024-04-17 14:49:00.000397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.400 [2024-04-17 14:49:00.000408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.400 [2024-04-17 14:49:00.000454] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.400 [2024-04-17 14:49:00.000471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:51.400 [2024-04-17 14:49:00.000486] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.400 [2024-04-17 14:49:00.000497] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.400 [2024-04-17 14:49:00.000562] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.400 [2024-04-17 14:49:00.000575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:51.400 [2024-04-17 14:49:00.000589] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.400 [2024-04-17 14:49:00.000602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.400 [2024-04-17 14:49:00.000655] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.400 [2024-04-17 14:49:00.000668] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:51.400 [2024-04-17 14:49:00.000687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.400 [2024-04-17 14:49:00.000717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.400 [2024-04-17 14:49:00.000874] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 422.871 ms, result 0 00:29:53.333 14:49:01 -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:29:53.333 14:49:01 -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:53.333 [2024-04-17 14:49:01.549651] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
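The two bdev_ftl_unmap calls and the spdk_dd read-back above form the core of the trim test and can be replayed by hand. A minimal sketch follows, with paths, LBAs and counts taken verbatim from the log; the 4 KiB FTL block size is an inference from the layout arithmetic (base-dev data region 0x1900000 blocks x 4096 B = 102400.00 MiB, matching the data_btm dump), not something the log states directly:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# L2P sizing cross-check: 23592960 entries x 4 B/entry = 90 MiB,
# matching "Region l2p ... blocks: 90.00 MiB" in the layout dump above.
echo $(( 23592960 * 4 / 1024 / 1024 ))    # prints 90

# Trim the first and the last 1024 blocks of the 23592960-entry L2P space;
# 23592960 - 1024 = 23591936 is the LBA used by the second call above.
$RPC bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
$RPC bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

# Dump 65536 blocks (256 MiB at 4 KiB per block) from ftl0 into the data
# file that the later comparison steps of the test consume.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
    --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

Note that the clean shutdown record above ("FTL shutdown", duration = 422.871 ms, result 0) sits between the unmaps and this read-back, so the trimmed state has to survive a full persist/restore cycle before any data is dumped; the second FTL startup that follows replays exactly that restore path.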
00:29:53.333 [2024-04-17 14:49:01.550246] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79334 ] 00:29:53.333 [2024-04-17 14:49:01.713371] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.604 [2024-04-17 14:49:01.969359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.877 [2024-04-17 14:49:02.418187] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:53.877 [2024-04-17 14:49:02.418275] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:54.140 [2024-04-17 14:49:02.581069] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.140 [2024-04-17 14:49:02.581152] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:54.140 [2024-04-17 14:49:02.581174] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:54.140 [2024-04-17 14:49:02.581186] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.140 [2024-04-17 14:49:02.585110] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.140 [2024-04-17 14:49:02.585172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:54.140 [2024-04-17 14:49:02.585203] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.886 ms 00:29:54.140 [2024-04-17 14:49:02.585220] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.140 [2024-04-17 14:49:02.585370] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:54.140 [2024-04-17 14:49:02.586820] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:54.140 [2024-04-17 14:49:02.586863] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.140 [2024-04-17 14:49:02.586881] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:54.140 [2024-04-17 14:49:02.586895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.513 ms 00:29:54.140 [2024-04-17 14:49:02.586907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.140 [2024-04-17 14:49:02.588721] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:54.140 [2024-04-17 14:49:02.610997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.140 [2024-04-17 14:49:02.611096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:54.140 [2024-04-17 14:49:02.611124] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.271 ms 00:29:54.140 [2024-04-17 14:49:02.611141] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.140 [2024-04-17 14:49:02.611408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.140 [2024-04-17 14:49:02.611444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:54.140 [2024-04-17 14:49:02.611488] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:29:54.140 [2024-04-17 14:49:02.611502] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.140 [2024-04-17 14:49:02.620401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.140 [2024-04-17 
14:49:02.620511] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:54.140 [2024-04-17 14:49:02.620535] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.763 ms 00:29:54.140 [2024-04-17 14:49:02.620551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.140 [2024-04-17 14:49:02.620787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.140 [2024-04-17 14:49:02.620820] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:54.140 [2024-04-17 14:49:02.620837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:29:54.140 [2024-04-17 14:49:02.620851] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.140 [2024-04-17 14:49:02.620920] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.140 [2024-04-17 14:49:02.620936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:54.140 [2024-04-17 14:49:02.620953] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:29:54.140 [2024-04-17 14:49:02.620967] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.140 [2024-04-17 14:49:02.621011] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:29:54.141 [2024-04-17 14:49:02.626311] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.141 [2024-04-17 14:49:02.626371] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:54.141 [2024-04-17 14:49:02.626393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.310 ms 00:29:54.141 [2024-04-17 14:49:02.626409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.141 [2024-04-17 14:49:02.626538] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.141 [2024-04-17 14:49:02.626560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:54.141 [2024-04-17 14:49:02.626578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:54.141 [2024-04-17 14:49:02.626594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.141 [2024-04-17 14:49:02.626631] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:54.141 [2024-04-17 14:49:02.626667] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:29:54.141 [2024-04-17 14:49:02.626714] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:54.141 [2024-04-17 14:49:02.626745] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:29:54.141 [2024-04-17 14:49:02.626832] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:29:54.141 [2024-04-17 14:49:02.626852] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:54.141 [2024-04-17 14:49:02.626871] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:29:54.141 [2024-04-17 14:49:02.626892] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:54.141 [2024-04-17 14:49:02.626911] ftl_layout.c: 675:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:54.141 [2024-04-17 14:49:02.626928] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:29:54.141 [2024-04-17 14:49:02.626944] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:54.141 [2024-04-17 14:49:02.626959] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:29:54.141 [2024-04-17 14:49:02.626974] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:29:54.141 [2024-04-17 14:49:02.626991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.141 [2024-04-17 14:49:02.627012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:54.141 [2024-04-17 14:49:02.627033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:29:54.141 [2024-04-17 14:49:02.627048] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.141 [2024-04-17 14:49:02.627133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.141 [2024-04-17 14:49:02.627151] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:54.141 [2024-04-17 14:49:02.627167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:29:54.141 [2024-04-17 14:49:02.627182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.141 [2024-04-17 14:49:02.627276] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:54.141 [2024-04-17 14:49:02.627296] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:54.141 [2024-04-17 14:49:02.627317] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:54.141 [2024-04-17 14:49:02.627333] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:54.141 [2024-04-17 14:49:02.627349] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:54.141 [2024-04-17 14:49:02.627364] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:54.141 [2024-04-17 14:49:02.627379] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:29:54.141 [2024-04-17 14:49:02.627394] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:54.141 [2024-04-17 14:49:02.627410] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:29:54.141 [2024-04-17 14:49:02.627425] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:54.141 [2024-04-17 14:49:02.627455] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:54.141 [2024-04-17 14:49:02.627470] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:29:54.141 [2024-04-17 14:49:02.627485] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:54.141 [2024-04-17 14:49:02.627517] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:54.141 [2024-04-17 14:49:02.627533] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:29:54.141 [2024-04-17 14:49:02.627548] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:54.141 [2024-04-17 14:49:02.627562] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:54.141 [2024-04-17 14:49:02.627577] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:29:54.141 [2024-04-17 14:49:02.627592] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:29:54.141 [2024-04-17 14:49:02.627607] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:29:54.141 [2024-04-17 14:49:02.627621] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:29:54.141 [2024-04-17 14:49:02.627636] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:29:54.141 [2024-04-17 14:49:02.627651] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:54.141 [2024-04-17 14:49:02.627666] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:29:54.141 [2024-04-17 14:49:02.627681] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:29:54.141 [2024-04-17 14:49:02.627706] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:54.141 [2024-04-17 14:49:02.627720] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:29:54.141 [2024-04-17 14:49:02.627734] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:29:54.141 [2024-04-17 14:49:02.627749] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:54.141 [2024-04-17 14:49:02.627763] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:29:54.141 [2024-04-17 14:49:02.627777] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:29:54.141 [2024-04-17 14:49:02.627791] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:54.141 [2024-04-17 14:49:02.627805] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:29:54.141 [2024-04-17 14:49:02.627819] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:29:54.141 [2024-04-17 14:49:02.627833] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:54.141 [2024-04-17 14:49:02.627847] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:29:54.141 [2024-04-17 14:49:02.627861] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:54.141 [2024-04-17 14:49:02.627875] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:54.141 [2024-04-17 14:49:02.627889] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:29:54.141 [2024-04-17 14:49:02.627902] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:54.141 [2024-04-17 14:49:02.627916] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:54.141 [2024-04-17 14:49:02.627931] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:54.141 [2024-04-17 14:49:02.627946] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:54.141 [2024-04-17 14:49:02.627960] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:54.141 [2024-04-17 14:49:02.627975] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:54.141 [2024-04-17 14:49:02.627989] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:54.141 [2024-04-17 14:49:02.628003] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:54.141 [2024-04-17 14:49:02.628018] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:54.141 [2024-04-17 14:49:02.628031] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:54.141 [2024-04-17 14:49:02.628045] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:54.141 [2024-04-17 14:49:02.628061] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:54.141 [2024-04-17 14:49:02.628078] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:54.141 [2024-04-17 14:49:02.628095] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:29:54.141 [2024-04-17 14:49:02.628111] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:29:54.141 [2024-04-17 14:49:02.628127] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:29:54.142 [2024-04-17 14:49:02.628143] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:29:54.142 [2024-04-17 14:49:02.628159] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:29:54.142 [2024-04-17 14:49:02.628174] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:29:54.142 [2024-04-17 14:49:02.628190] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:29:54.142 [2024-04-17 14:49:02.628207] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:29:54.142 [2024-04-17 14:49:02.628223] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:29:54.142 [2024-04-17 14:49:02.628238] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:29:54.142 [2024-04-17 14:49:02.628254] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:29:54.142 [2024-04-17 14:49:02.628270] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:29:54.142 [2024-04-17 14:49:02.628286] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:29:54.142 [2024-04-17 14:49:02.628301] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:54.142 [2024-04-17 14:49:02.628318] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:54.142 [2024-04-17 14:49:02.628336] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:54.142 [2024-04-17 14:49:02.628353] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:54.142 [2024-04-17 14:49:02.628368] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:54.142 [2024-04-17 14:49:02.628384] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:54.142 [2024-04-17 14:49:02.628402] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.142 [2024-04-17 14:49:02.628423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:54.142 [2024-04-17 14:49:02.628438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.169 ms 00:29:54.142 [2024-04-17 14:49:02.628452] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.142 [2024-04-17 14:49:02.653426] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.142 [2024-04-17 14:49:02.653506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:54.142 [2024-04-17 14:49:02.653528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.887 ms 00:29:54.142 [2024-04-17 14:49:02.653543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.142 [2024-04-17 14:49:02.653722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.142 [2024-04-17 14:49:02.653740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:54.142 [2024-04-17 14:49:02.653756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:29:54.142 [2024-04-17 14:49:02.653769] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.142 [2024-04-17 14:49:02.722089] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.142 [2024-04-17 14:49:02.722169] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:54.142 [2024-04-17 14:49:02.722187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.287 ms 00:29:54.142 [2024-04-17 14:49:02.722199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.142 [2024-04-17 14:49:02.722317] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.142 [2024-04-17 14:49:02.722331] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:54.142 [2024-04-17 14:49:02.722344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:54.142 [2024-04-17 14:49:02.722364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.142 [2024-04-17 14:49:02.722860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.142 [2024-04-17 14:49:02.722889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:54.142 [2024-04-17 14:49:02.722904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:29:54.142 [2024-04-17 14:49:02.722915] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.142 [2024-04-17 14:49:02.723051] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.142 [2024-04-17 14:49:02.723067] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:54.142 [2024-04-17 14:49:02.723080] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:29:54.142 [2024-04-17 14:49:02.723091] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.400 [2024-04-17 14:49:02.749449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.400 [2024-04-17 14:49:02.749527] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:54.400 [2024-04-17 14:49:02.749546] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.328 ms 00:29:54.400 
[2024-04-17 14:49:02.749558] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.400 [2024-04-17 14:49:02.772843] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:29:54.400 [2024-04-17 14:49:02.772916] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:54.400 [2024-04-17 14:49:02.772934] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.400 [2024-04-17 14:49:02.772946] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:54.400 [2024-04-17 14:49:02.772961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.204 ms 00:29:54.400 [2024-04-17 14:49:02.772972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.400 [2024-04-17 14:49:02.810213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.400 [2024-04-17 14:49:02.810297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:54.400 [2024-04-17 14:49:02.810345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.085 ms 00:29:54.400 [2024-04-17 14:49:02.810366] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.400 [2024-04-17 14:49:02.834089] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.400 [2024-04-17 14:49:02.834162] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:54.400 [2024-04-17 14:49:02.834180] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.545 ms 00:29:54.400 [2024-04-17 14:49:02.834192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.400 [2024-04-17 14:49:02.857901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.400 [2024-04-17 14:49:02.857986] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:54.400 [2024-04-17 14:49:02.858005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.568 ms 00:29:54.400 [2024-04-17 14:49:02.858017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.400 [2024-04-17 14:49:02.858720] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.400 [2024-04-17 14:49:02.858752] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:54.400 [2024-04-17 14:49:02.858766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:29:54.400 [2024-04-17 14:49:02.858778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.400 [2024-04-17 14:49:02.962333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.400 [2024-04-17 14:49:02.962424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:54.400 [2024-04-17 14:49:02.962444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.519 ms 00:29:54.400 [2024-04-17 14:49:02.962473] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.400 [2024-04-17 14:49:02.977448] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:29:54.400 [2024-04-17 14:49:02.996178] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.400 [2024-04-17 14:49:02.996247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:54.400 [2024-04-17 14:49:02.996265] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.534 ms 00:29:54.400 [2024-04-17 14:49:02.996277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.400 [2024-04-17 14:49:02.996405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.400 [2024-04-17 14:49:02.996420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:54.401 [2024-04-17 14:49:02.996432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:54.401 [2024-04-17 14:49:02.996443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.401 [2024-04-17 14:49:02.996531] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.401 [2024-04-17 14:49:02.996545] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:54.401 [2024-04-17 14:49:02.996557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:29:54.401 [2024-04-17 14:49:02.996569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.401 [2024-04-17 14:49:02.999028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.401 [2024-04-17 14:49:02.999077] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:29:54.401 [2024-04-17 14:49:02.999091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.431 ms 00:29:54.401 [2024-04-17 14:49:02.999102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.401 [2024-04-17 14:49:02.999143] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.401 [2024-04-17 14:49:02.999156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:54.401 [2024-04-17 14:49:02.999172] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:54.401 [2024-04-17 14:49:02.999183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.401 [2024-04-17 14:49:02.999237] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:54.401 [2024-04-17 14:49:02.999251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.401 [2024-04-17 14:49:02.999263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:54.401 [2024-04-17 14:49:02.999275] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:29:54.401 [2024-04-17 14:49:02.999286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.659 [2024-04-17 14:49:03.040563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.659 [2024-04-17 14:49:03.040629] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:54.659 [2024-04-17 14:49:03.040646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.246 ms 00:29:54.659 [2024-04-17 14:49:03.040657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.659 [2024-04-17 14:49:03.040788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.659 [2024-04-17 14:49:03.040802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:54.659 [2024-04-17 14:49:03.040815] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:29:54.659 [2024-04-17 14:49:03.040826] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.659 [2024-04-17 14:49:03.041891] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:54.659 [2024-04-17 14:49:03.047869] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 460.516 ms, result 0 00:29:54.659 [2024-04-17 14:49:03.048689] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:54.659 [2024-04-17 14:49:03.069161] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:02.650  Copying: 256/256 [MB] (average 31 MBps)[2024-04-17 14:49:11.170859] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:02.650 [2024-04-17 14:49:11.187460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.650 [2024-04-17 14:49:11.187522] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:02.650 [2024-04-17 14:49:11.187540] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:02.651 [2024-04-17 14:49:11.187552] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.651 [2024-04-17 14:49:11.187589] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:30:02.651 [2024-04-17 14:49:11.191803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.651 [2024-04-17 14:49:11.191839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:02.651 [2024-04-17 14:49:11.191852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.193 ms 00:30:02.651 [2024-04-17 14:49:11.191862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.651 [2024-04-17 14:49:11.192138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.651 [2024-04-17 14:49:11.192156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:02.651 [2024-04-17 14:49:11.192172] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:30:02.651 [2024-04-17 14:49:11.192196] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.651 [2024-04-17 14:49:11.195695] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.651 [2024-04-17 14:49:11.195723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:02.651 [2024-04-17 14:49:11.195741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.478 ms 00:30:02.651 [2024-04-17 14:49:11.195753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.651 [2024-04-17 14:49:11.202114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.651 [2024-04-17 14:49:11.202155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:30:02.651 [2024-04-17 14:49:11.202168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.338 ms 00:30:02.651 [2024-04-17 14:49:11.202196] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.651 [2024-04-17 14:49:11.245884] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.651 [2024-04-17 14:49:11.245969]
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:02.651 [2024-04-17 14:49:11.245988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.601 ms 00:30:02.651 [2024-04-17 14:49:11.245998] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.910 [2024-04-17 14:49:11.271313] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.910 [2024-04-17 14:49:11.271384] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:02.911 [2024-04-17 14:49:11.271403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.228 ms 00:30:02.911 [2024-04-17 14:49:11.271426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.911 [2024-04-17 14:49:11.271642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.911 [2024-04-17 14:49:11.271672] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:02.911 [2024-04-17 14:49:11.271684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:30:02.911 [2024-04-17 14:49:11.271695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.911 [2024-04-17 14:49:11.314914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.911 [2024-04-17 14:49:11.314977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:30:02.911 [2024-04-17 14:49:11.314993] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.184 ms 00:30:02.911 [2024-04-17 14:49:11.315022] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.911 [2024-04-17 14:49:11.358622] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.911 [2024-04-17 14:49:11.358687] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:30:02.911 [2024-04-17 14:49:11.358706] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.522 ms 00:30:02.911 [2024-04-17 14:49:11.358717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.911 [2024-04-17 14:49:11.402782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.911 [2024-04-17 14:49:11.402865] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:02.911 [2024-04-17 14:49:11.402884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.983 ms 00:30:02.911 [2024-04-17 14:49:11.402895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.911 [2024-04-17 14:49:11.450651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.911 [2024-04-17 14:49:11.450734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:02.911 [2024-04-17 14:49:11.450754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.603 ms 00:30:02.911 [2024-04-17 14:49:11.450783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.911 [2024-04-17 14:49:11.450890] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:02.911 [2024-04-17 14:49:11.450926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.450941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.450954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 
wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.450967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.450980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.450993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
28: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451753] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:02.911 [2024-04-17 14:49:11.451777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.451790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.451802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.451815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.451827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.451840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.451852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.451869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.451900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.451914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.451927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.451939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.451951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.451971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.451994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.452009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.452022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.452034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.452047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.452060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.452072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.452089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.452111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.452127] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.452140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.452152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.452165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.452177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.452190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.452202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.452215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.452227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.452240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.452252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.452265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.452278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.452290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.452302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.452338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.452360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.452381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.452402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.452415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.452428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.452441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.452453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:02.912 [2024-04-17 14:49:11.452476] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:02.912 [2024-04-17 14:49:11.452501] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5b2ea0b9-3b71-4165-8192-0f28abbb9f7b 00:30:02.912 [2024-04-17 14:49:11.452515] ftl_debug.c: 
213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:02.912 [2024-04-17 14:49:11.452527] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:02.912 [2024-04-17 14:49:11.452539] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:02.912 [2024-04-17 14:49:11.452551] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:02.912 [2024-04-17 14:49:11.452562] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:02.912 [2024-04-17 14:49:11.452575] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:02.912 [2024-04-17 14:49:11.452586] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:02.912 [2024-04-17 14:49:11.452597] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:02.912 [2024-04-17 14:49:11.452608] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:02.912 [2024-04-17 14:49:11.452621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.912 [2024-04-17 14:49:11.452633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:02.912 [2024-04-17 14:49:11.452652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.732 ms 00:30:02.912 [2024-04-17 14:49:11.452663] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.912 [2024-04-17 14:49:11.475415] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.912 [2024-04-17 14:49:11.475502] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:02.912 [2024-04-17 14:49:11.475521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.700 ms 00:30:02.912 [2024-04-17 14:49:11.475534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:02.912 [2024-04-17 14:49:11.475924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:02.912 [2024-04-17 14:49:11.475965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:02.912 [2024-04-17 14:49:11.475979] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:30:02.912 [2024-04-17 14:49:11.475991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.171 [2024-04-17 14:49:11.545594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:03.171 [2024-04-17 14:49:11.545654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:03.171 [2024-04-17 14:49:11.545671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:03.171 [2024-04-17 14:49:11.545683] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.171 [2024-04-17 14:49:11.545814] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:03.171 [2024-04-17 14:49:11.545834] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:03.171 [2024-04-17 14:49:11.545846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:03.171 [2024-04-17 14:49:11.545857] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.171 [2024-04-17 14:49:11.545926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:03.171 [2024-04-17 14:49:11.545940] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:03.171 [2024-04-17 14:49:11.545952] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:30:03.171 [2024-04-17 14:49:11.545963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.171 [2024-04-17 14:49:11.545991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:03.171 [2024-04-17 14:49:11.546011] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:03.171 [2024-04-17 14:49:11.546034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:03.171 [2024-04-17 14:49:11.546053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.171 [2024-04-17 14:49:11.678134] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:03.171 [2024-04-17 14:49:11.678197] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:03.171 [2024-04-17 14:49:11.678215] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:03.171 [2024-04-17 14:49:11.678226] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.171 [2024-04-17 14:49:11.728582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:03.171 [2024-04-17 14:49:11.728655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:03.171 [2024-04-17 14:49:11.728671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:03.171 [2024-04-17 14:49:11.728683] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.171 [2024-04-17 14:49:11.728779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:03.171 [2024-04-17 14:49:11.728793] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:03.172 [2024-04-17 14:49:11.728805] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:03.172 [2024-04-17 14:49:11.728815] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.172 [2024-04-17 14:49:11.728847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:03.172 [2024-04-17 14:49:11.728859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:03.172 [2024-04-17 14:49:11.728870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:03.172 [2024-04-17 14:49:11.728885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.172 [2024-04-17 14:49:11.729003] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:03.172 [2024-04-17 14:49:11.729023] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:03.172 [2024-04-17 14:49:11.729041] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:03.172 [2024-04-17 14:49:11.729057] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.172 [2024-04-17 14:49:11.729121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:03.172 [2024-04-17 14:49:11.729142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:03.172 [2024-04-17 14:49:11.729153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:03.172 [2024-04-17 14:49:11.729164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.172 [2024-04-17 14:49:11.729224] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:03.172 [2024-04-17 14:49:11.729238] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:03.172 [2024-04-17 
14:49:11.729249] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:03.172 [2024-04-17 14:49:11.729260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.172 [2024-04-17 14:49:11.729308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:03.172 [2024-04-17 14:49:11.729320] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:03.172 [2024-04-17 14:49:11.729331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:03.172 [2024-04-17 14:49:11.729346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.172 [2024-04-17 14:49:11.729528] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 542.065 ms, result 0 00:30:04.618 00:30:04.618 00:30:04.876 14:49:13 -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:30:04.876 14:49:13 -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:30:05.443 14:49:13 -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:05.443 [2024-04-17 14:49:13.886678] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:30:05.443 [2024-04-17 14:49:13.887958] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79461 ] 00:30:05.702 [2024-04-17 14:49:14.053540] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.016 [2024-04-17 14:49:14.326813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:06.275 [2024-04-17 14:49:14.774912] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:06.275 [2024-04-17 14:49:14.774985] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:06.535 [2024-04-17 14:49:14.932127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.535 [2024-04-17 14:49:14.932188] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:06.535 [2024-04-17 14:49:14.932207] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:06.535 [2024-04-17 14:49:14.932218] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.535 [2024-04-17 14:49:14.935724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.535 [2024-04-17 14:49:14.935764] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:06.535 [2024-04-17 14:49:14.935776] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.485 ms 00:30:06.535 [2024-04-17 14:49:14.935789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.535 [2024-04-17 14:49:14.935906] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:06.535 [2024-04-17 14:49:14.937096] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:06.535 [2024-04-17 14:49:14.937131] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.535 [2024-04-17 14:49:14.937146] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:30:06.535 [2024-04-17 14:49:14.937158] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.241 ms 00:30:06.535 [2024-04-17 14:49:14.937168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.535 [2024-04-17 14:49:14.938889] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:06.535 [2024-04-17 14:49:14.961046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.535 [2024-04-17 14:49:14.961109] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:06.535 [2024-04-17 14:49:14.961126] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.154 ms 00:30:06.535 [2024-04-17 14:49:14.961137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.535 [2024-04-17 14:49:14.961277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.535 [2024-04-17 14:49:14.961292] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:06.535 [2024-04-17 14:49:14.961307] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:30:06.535 [2024-04-17 14:49:14.961316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.535 [2024-04-17 14:49:14.968346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.535 [2024-04-17 14:49:14.968379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:06.535 [2024-04-17 14:49:14.968391] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.977 ms 00:30:06.535 [2024-04-17 14:49:14.968401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.535 [2024-04-17 14:49:14.968529] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.535 [2024-04-17 14:49:14.968549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:06.535 [2024-04-17 14:49:14.968560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:30:06.535 [2024-04-17 14:49:14.968572] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.535 [2024-04-17 14:49:14.968602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.535 [2024-04-17 14:49:14.968613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:06.535 [2024-04-17 14:49:14.968624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:06.535 [2024-04-17 14:49:14.968633] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.535 [2024-04-17 14:49:14.968658] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:30:06.535 [2024-04-17 14:49:14.974737] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.535 [2024-04-17 14:49:14.974769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:06.535 [2024-04-17 14:49:14.974797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.086 ms 00:30:06.535 [2024-04-17 14:49:14.974808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.535 [2024-04-17 14:49:14.974886] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.535 [2024-04-17 14:49:14.974899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:06.535 [2024-04-17 14:49:14.974911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.011 ms 00:30:06.535 [2024-04-17 14:49:14.974921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.535 [2024-04-17 14:49:14.974946] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:06.535 [2024-04-17 14:49:14.974972] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:30:06.535 [2024-04-17 14:49:14.975010] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:06.535 [2024-04-17 14:49:14.975032] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:30:06.535 [2024-04-17 14:49:14.975106] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:30:06.535 [2024-04-17 14:49:14.975120] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:06.535 [2024-04-17 14:49:14.975134] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:30:06.535 [2024-04-17 14:49:14.975148] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:06.535 [2024-04-17 14:49:14.975161] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:06.535 [2024-04-17 14:49:14.975173] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:30:06.535 [2024-04-17 14:49:14.975183] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:06.535 [2024-04-17 14:49:14.975193] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:30:06.535 [2024-04-17 14:49:14.975204] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:30:06.535 [2024-04-17 14:49:14.975215] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.535 [2024-04-17 14:49:14.975229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:06.535 [2024-04-17 14:49:14.975243] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:30:06.535 [2024-04-17 14:49:14.975254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.535 [2024-04-17 14:49:14.975320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.535 [2024-04-17 14:49:14.975347] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:06.535 [2024-04-17 14:49:14.975358] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:30:06.535 [2024-04-17 14:49:14.975368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.535 [2024-04-17 14:49:14.975445] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:06.535 [2024-04-17 14:49:14.975458] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:06.535 [2024-04-17 14:49:14.975473] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:06.535 [2024-04-17 14:49:14.975483] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:06.535 [2024-04-17 14:49:14.975506] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:06.535 [2024-04-17 14:49:14.975516] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:06.535 [2024-04-17 
14:49:14.975527] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:30:06.535 [2024-04-17 14:49:14.975537] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:06.535 [2024-04-17 14:49:14.975547] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:30:06.535 [2024-04-17 14:49:14.975557] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:06.536 [2024-04-17 14:49:14.975579] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:06.536 [2024-04-17 14:49:14.975590] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:30:06.536 [2024-04-17 14:49:14.975600] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:06.536 [2024-04-17 14:49:14.975610] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:06.536 [2024-04-17 14:49:14.975620] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:30:06.536 [2024-04-17 14:49:14.975630] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:06.536 [2024-04-17 14:49:14.975640] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:06.536 [2024-04-17 14:49:14.975650] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:30:06.536 [2024-04-17 14:49:14.975660] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:06.536 [2024-04-17 14:49:14.975670] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:30:06.536 [2024-04-17 14:49:14.975680] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:30:06.536 [2024-04-17 14:49:14.975690] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:30:06.536 [2024-04-17 14:49:14.975700] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:06.536 [2024-04-17 14:49:14.975710] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:30:06.536 [2024-04-17 14:49:14.975720] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:30:06.536 [2024-04-17 14:49:14.975729] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:06.536 [2024-04-17 14:49:14.975739] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:30:06.536 [2024-04-17 14:49:14.975749] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:30:06.536 [2024-04-17 14:49:14.975759] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:06.536 [2024-04-17 14:49:14.975768] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:30:06.536 [2024-04-17 14:49:14.975778] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:30:06.536 [2024-04-17 14:49:14.975788] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:06.536 [2024-04-17 14:49:14.975797] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:30:06.536 [2024-04-17 14:49:14.975807] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:30:06.536 [2024-04-17 14:49:14.975817] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:06.536 [2024-04-17 14:49:14.975827] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:30:06.536 [2024-04-17 14:49:14.975836] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:06.536 [2024-04-17 14:49:14.975847] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md_mirror 00:30:06.536 [2024-04-17 14:49:14.975857] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:30:06.536 [2024-04-17 14:49:14.975866] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:06.536 [2024-04-17 14:49:14.975876] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:06.536 [2024-04-17 14:49:14.975886] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:06.536 [2024-04-17 14:49:14.975898] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:06.536 [2024-04-17 14:49:14.975908] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:06.536 [2024-04-17 14:49:14.975919] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:06.536 [2024-04-17 14:49:14.975929] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:06.536 [2024-04-17 14:49:14.975939] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:06.536 [2024-04-17 14:49:14.975955] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:06.536 [2024-04-17 14:49:14.975965] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:06.536 [2024-04-17 14:49:14.975975] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:06.536 [2024-04-17 14:49:14.975986] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:06.536 [2024-04-17 14:49:14.976015] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:06.536 [2024-04-17 14:49:14.976028] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:30:06.536 [2024-04-17 14:49:14.976040] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:30:06.536 [2024-04-17 14:49:14.976053] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:30:06.536 [2024-04-17 14:49:14.976064] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:30:06.536 [2024-04-17 14:49:14.976076] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:30:06.536 [2024-04-17 14:49:14.976088] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:30:06.536 [2024-04-17 14:49:14.976100] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:30:06.536 [2024-04-17 14:49:14.976112] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:30:06.536 [2024-04-17 14:49:14.976124] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:30:06.536 [2024-04-17 14:49:14.976136] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:30:06.536 [2024-04-17 14:49:14.976147] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:30:06.536 [2024-04-17 14:49:14.976160] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:30:06.536 [2024-04-17 14:49:14.976172] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:30:06.536 [2024-04-17 14:49:14.976183] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:06.536 [2024-04-17 14:49:14.976196] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:06.536 [2024-04-17 14:49:14.976209] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:06.536 [2024-04-17 14:49:14.976221] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:06.536 [2024-04-17 14:49:14.976233] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:06.536 [2024-04-17 14:49:14.976245] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:06.536 [2024-04-17 14:49:14.976257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.536 [2024-04-17 14:49:14.976272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:06.536 [2024-04-17 14:49:14.976283] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.851 ms 00:30:06.536 [2024-04-17 14:49:14.976296] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.536 [2024-04-17 14:49:15.004460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.536 [2024-04-17 14:49:15.004512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:06.536 [2024-04-17 14:49:15.004528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.108 ms 00:30:06.536 [2024-04-17 14:49:15.004538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.536 [2024-04-17 14:49:15.004668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.536 [2024-04-17 14:49:15.004680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:06.536 [2024-04-17 14:49:15.004691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:30:06.536 [2024-04-17 14:49:15.004701] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.536 [2024-04-17 14:49:15.073973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.536 [2024-04-17 14:49:15.074028] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:06.536 [2024-04-17 14:49:15.074044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.247 ms 00:30:06.536 [2024-04-17 14:49:15.074054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.536 [2024-04-17 14:49:15.074162] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.536 [2024-04-17 14:49:15.074174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:06.537 [2024-04-17 14:49:15.074185] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:06.537 [2024-04-17 14:49:15.074195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.537 [2024-04-17 14:49:15.074678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.537 [2024-04-17 14:49:15.074705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:06.537 [2024-04-17 14:49:15.074717] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.455 ms 00:30:06.537 [2024-04-17 14:49:15.074728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.537 [2024-04-17 14:49:15.074852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.537 [2024-04-17 14:49:15.074867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:06.537 [2024-04-17 14:49:15.074879] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:30:06.537 [2024-04-17 14:49:15.074889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.537 [2024-04-17 14:49:15.099944] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.537 [2024-04-17 14:49:15.099998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:06.537 [2024-04-17 14:49:15.100015] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.028 ms 00:30:06.537 [2024-04-17 14:49:15.100027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.537 [2024-04-17 14:49:15.124366] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:30:06.537 [2024-04-17 14:49:15.124466] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:06.537 [2024-04-17 14:49:15.124485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.537 [2024-04-17 14:49:15.124519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:06.537 [2024-04-17 14:49:15.124536] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.299 ms 00:30:06.537 [2024-04-17 14:49:15.124547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.796 [2024-04-17 14:49:15.162069] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.796 [2024-04-17 14:49:15.162145] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:06.796 [2024-04-17 14:49:15.162174] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.371 ms 00:30:06.796 [2024-04-17 14:49:15.162186] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.796 [2024-04-17 14:49:15.184008] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.796 [2024-04-17 14:49:15.184071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:06.796 [2024-04-17 14:49:15.184088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.659 ms 00:30:06.796 [2024-04-17 14:49:15.184097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.796 [2024-04-17 14:49:15.204251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.796 [2024-04-17 14:49:15.204304] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:06.796 [2024-04-17 14:49:15.204319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.024 ms 00:30:06.796 [2024-04-17 
14:49:15.204329] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.796 [2024-04-17 14:49:15.204946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.796 [2024-04-17 14:49:15.204970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:06.796 [2024-04-17 14:49:15.204984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.500 ms 00:30:06.796 [2024-04-17 14:49:15.204996] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.796 [2024-04-17 14:49:15.308580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.796 [2024-04-17 14:49:15.308649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:06.796 [2024-04-17 14:49:15.308667] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.551 ms 00:30:06.796 [2024-04-17 14:49:15.308677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.796 [2024-04-17 14:49:15.323591] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:30:06.796 [2024-04-17 14:49:15.341044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.796 [2024-04-17 14:49:15.341103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:06.796 [2024-04-17 14:49:15.341119] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.203 ms 00:30:06.796 [2024-04-17 14:49:15.341129] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.796 [2024-04-17 14:49:15.341244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.796 [2024-04-17 14:49:15.341257] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:06.796 [2024-04-17 14:49:15.341269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:06.796 [2024-04-17 14:49:15.341279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.796 [2024-04-17 14:49:15.341333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.796 [2024-04-17 14:49:15.341349] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:06.796 [2024-04-17 14:49:15.341359] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:30:06.796 [2024-04-17 14:49:15.341368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.796 [2024-04-17 14:49:15.343647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.796 [2024-04-17 14:49:15.343680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:30:06.796 [2024-04-17 14:49:15.343691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.259 ms 00:30:06.796 [2024-04-17 14:49:15.343701] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.796 [2024-04-17 14:49:15.343733] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.796 [2024-04-17 14:49:15.343744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:06.796 [2024-04-17 14:49:15.343759] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:06.796 [2024-04-17 14:49:15.343769] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.796 [2024-04-17 14:49:15.343802] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:06.796 [2024-04-17 
14:49:15.343814] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.796 [2024-04-17 14:49:15.343824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:06.796 [2024-04-17 14:49:15.343834] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:06.796 [2024-04-17 14:49:15.343844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.796 [2024-04-17 14:49:15.387180] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.796 [2024-04-17 14:49:15.387244] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:06.796 [2024-04-17 14:49:15.387263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.310 ms 00:30:06.796 [2024-04-17 14:49:15.387276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.796 [2024-04-17 14:49:15.387434] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.796 [2024-04-17 14:49:15.387450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:06.796 [2024-04-17 14:49:15.387463] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:30:06.796 [2024-04-17 14:49:15.387474] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.796 [2024-04-17 14:49:15.388560] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:06.796 [2024-04-17 14:49:15.395317] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 456.093 ms, result 0 00:30:06.796 [2024-04-17 14:49:15.396185] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:07.054 [2024-04-17 14:49:15.417466] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:07.054 Copying: 4096/4096 [kB] (average 32 MBps) [2024-04-17 14:49:15.546317] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:07.054 [2024-04-17 14:49:15.562908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.054 [2024-04-17 14:49:15.562964] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:07.054 [2024-04-17 14:49:15.562982] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:07.054 [2024-04-17 14:49:15.562994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.054 [2024-04-17 14:49:15.563025] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:30:07.054 [2024-04-17 14:49:15.567049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.054 [2024-04-17 14:49:15.567092] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:07.054 [2024-04-17 14:49:15.567107] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.005 ms 00:30:07.054 [2024-04-17 14:49:15.567119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.054 [2024-04-17 14:49:15.568843] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.054 [2024-04-17 14:49:15.568882] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:07.054 [2024-04-17 14:49:15.568896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.672 ms
00:30:07.054 [2024-04-17 14:49:15.568908] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.054 [2024-04-17 14:49:15.572668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.054 [2024-04-17 14:49:15.572701] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:07.054 [2024-04-17 14:49:15.572714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.731 ms 00:30:07.054 [2024-04-17 14:49:15.572730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.054 [2024-04-17 14:49:15.579126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.054 [2024-04-17 14:49:15.579170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:30:07.054 [2024-04-17 14:49:15.579184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.341 ms 00:30:07.054 [2024-04-17 14:49:15.579195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.054 [2024-04-17 14:49:15.633634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.054 [2024-04-17 14:49:15.633708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:07.054 [2024-04-17 14:49:15.633732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.324 ms 00:30:07.054 [2024-04-17 14:49:15.633748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.314 [2024-04-17 14:49:15.666959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.314 [2024-04-17 14:49:15.667058] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:07.314 [2024-04-17 14:49:15.667082] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.055 ms 00:30:07.314 [2024-04-17 14:49:15.667100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.314 [2024-04-17 14:49:15.667413] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.314 [2024-04-17 14:49:15.667454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:07.314 [2024-04-17 14:49:15.667472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:30:07.314 [2024-04-17 14:49:15.667510] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.314 [2024-04-17 14:49:15.729585] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.314 [2024-04-17 14:49:15.729672] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:30:07.314 [2024-04-17 14:49:15.729696] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.038 ms 00:30:07.314 [2024-04-17 14:49:15.729712] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.314 [2024-04-17 14:49:15.794304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.314 [2024-04-17 14:49:15.794394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:30:07.314 [2024-04-17 14:49:15.794419] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.435 ms 00:30:07.314 [2024-04-17 14:49:15.794434] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.314 [2024-04-17 14:49:15.849960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.314 [2024-04-17 14:49:15.850029] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:07.314 [2024-04-17 14:49:15.850048] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.360 ms 00:30:07.314 [2024-04-17 14:49:15.850075] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.314 [2024-04-17 14:49:15.893162] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.314 [2024-04-17 14:49:15.893230] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:07.314 [2024-04-17 14:49:15.893246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.913 ms 00:30:07.314 [2024-04-17 14:49:15.893272] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.314 [2024-04-17 14:49:15.893391] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:07.314 [2024-04-17 14:49:15.893413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893956] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.893990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.894001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.894013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.894025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.894036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.894048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.894060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.894071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.894083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.894094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.894106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.894117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.894128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.894140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.894151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.894163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.894174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.894185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.894197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.894208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.894219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:07.315 [2024-04-17 14:49:15.894231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:07.316 [2024-04-17 
14:49:15.894242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:07.316 [2024-04-17 14:49:15.894255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:07.316 [2024-04-17 14:49:15.894266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:07.316 [2024-04-17 14:49:15.894278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:07.316 [2024-04-17 14:49:15.894289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:07.316 [2024-04-17 14:49:15.894301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:07.316 [2024-04-17 14:49:15.894312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:07.316 [2024-04-17 14:49:15.894325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:07.316 [2024-04-17 14:49:15.894337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:07.316 [2024-04-17 14:49:15.894348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:07.316 [2024-04-17 14:49:15.894367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:07.316 [2024-04-17 14:49:15.894378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:07.316 [2024-04-17 14:49:15.894390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:07.316 [2024-04-17 14:49:15.894402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:07.316 [2024-04-17 14:49:15.894414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:07.316 [2024-04-17 14:49:15.894425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:07.316 [2024-04-17 14:49:15.894437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:07.316 [2024-04-17 14:49:15.894448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:07.316 [2024-04-17 14:49:15.894460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:07.316 [2024-04-17 14:49:15.894472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:07.316 [2024-04-17 14:49:15.894483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:07.316 [2024-04-17 14:49:15.894504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:07.316 [2024-04-17 14:49:15.894516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:07.316 [2024-04-17 14:49:15.894540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:07.316 [2024-04-17 14:49:15.894552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 
00:30:07.316 [2024-04-17 14:49:15.894563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:07.316 [2024-04-17 14:49:15.894575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:07.316 [2024-04-17 14:49:15.894587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:07.316 [2024-04-17 14:49:15.894599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:07.316 [2024-04-17 14:49:15.894610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:07.316 [2024-04-17 14:49:15.894622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:07.316 [2024-04-17 14:49:15.894642] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:07.316 [2024-04-17 14:49:15.894652] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5b2ea0b9-3b71-4165-8192-0f28abbb9f7b 00:30:07.316 [2024-04-17 14:49:15.894664] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:07.316 [2024-04-17 14:49:15.894675] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:07.316 [2024-04-17 14:49:15.894685] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:07.316 [2024-04-17 14:49:15.894696] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:07.316 [2024-04-17 14:49:15.894706] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:07.316 [2024-04-17 14:49:15.894718] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:07.316 [2024-04-17 14:49:15.894729] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:07.316 [2024-04-17 14:49:15.894739] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:07.316 [2024-04-17 14:49:15.894748] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:07.316 [2024-04-17 14:49:15.894759] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.316 [2024-04-17 14:49:15.894770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:07.316 [2024-04-17 14:49:15.894782] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.371 ms 00:30:07.316 [2024-04-17 14:49:15.894796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.575 [2024-04-17 14:49:15.916830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.575 [2024-04-17 14:49:15.916880] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:07.575 [2024-04-17 14:49:15.916895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.008 ms 00:30:07.575 [2024-04-17 14:49:15.916906] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.575 [2024-04-17 14:49:15.917239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.575 [2024-04-17 14:49:15.917257] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:07.575 [2024-04-17 14:49:15.917275] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:30:07.575 [2024-04-17 14:49:15.917286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.575 [2024-04-17 14:49:15.977742] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:30:07.575 [2024-04-17 14:49:15.977821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:07.575 [2024-04-17 14:49:15.977838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.575 [2024-04-17 14:49:15.977866] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.575 [2024-04-17 14:49:15.977997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:07.575 [2024-04-17 14:49:15.978010] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:07.575 [2024-04-17 14:49:15.978028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.575 [2024-04-17 14:49:15.978040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.575 [2024-04-17 14:49:15.978106] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:07.575 [2024-04-17 14:49:15.978121] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:07.575 [2024-04-17 14:49:15.978133] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.575 [2024-04-17 14:49:15.978144] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.575 [2024-04-17 14:49:15.978166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:07.575 [2024-04-17 14:49:15.978178] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:07.575 [2024-04-17 14:49:15.978190] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.575 [2024-04-17 14:49:15.978205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.575 [2024-04-17 14:49:16.103247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:07.575 [2024-04-17 14:49:16.103317] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:07.575 [2024-04-17 14:49:16.103336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.575 [2024-04-17 14:49:16.103348] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.575 [2024-04-17 14:49:16.155309] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:07.575 [2024-04-17 14:49:16.155374] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:07.575 [2024-04-17 14:49:16.155400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.575 [2024-04-17 14:49:16.155412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.575 [2024-04-17 14:49:16.155540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:07.575 [2024-04-17 14:49:16.155555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:07.575 [2024-04-17 14:49:16.155567] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.575 [2024-04-17 14:49:16.155578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.575 [2024-04-17 14:49:16.155611] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:07.575 [2024-04-17 14:49:16.155623] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:07.575 [2024-04-17 14:49:16.155635] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.575 [2024-04-17 14:49:16.155646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:30:07.575 [2024-04-17 14:49:16.155772] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:07.575 [2024-04-17 14:49:16.155786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:07.575 [2024-04-17 14:49:16.155798] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.575 [2024-04-17 14:49:16.155810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.575 [2024-04-17 14:49:16.155857] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:07.575 [2024-04-17 14:49:16.155871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:07.575 [2024-04-17 14:49:16.155887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.575 [2024-04-17 14:49:16.155899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.575 [2024-04-17 14:49:16.155945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:07.575 [2024-04-17 14:49:16.155959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:07.575 [2024-04-17 14:49:16.155971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.575 [2024-04-17 14:49:16.155982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.575 [2024-04-17 14:49:16.156032] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:07.575 [2024-04-17 14:49:16.156049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:07.575 [2024-04-17 14:49:16.156061] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:07.575 [2024-04-17 14:49:16.156072] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.576 [2024-04-17 14:49:16.156221] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 593.335 ms, result 0 00:30:09.477 00:30:09.477 00:30:09.477 14:49:17 -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:30:09.477 14:49:17 -- ftl/trim.sh@93 -- # svcpid=79497 00:30:09.477 14:49:17 -- ftl/trim.sh@94 -- # waitforlisten 79497 00:30:09.477 14:49:17 -- common/autotest_common.sh@817 -- # '[' -z 79497 ']' 00:30:09.477 14:49:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:09.477 14:49:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:09.478 14:49:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:09.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:09.478 14:49:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:09.478 14:49:17 -- common/autotest_common.sh@10 -- # set +x 00:30:09.478 [2024-04-17 14:49:17.698873] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
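The `trace_step` notices above follow a fixed pattern per FTL management step: an `Action` (or `Rollback`) marker, a `name:` line, a `duration:` line in milliseconds, and a `status:` line, with `finish_msg` reporting the total for the whole process (e.g. 'FTL startup', duration = 456.093 ms and 'FTL shutdown', duration = 593.335 ms above). A minimal sketch of pulling the per-step durations out of a saved copy of this console output, to see where that startup/shutdown time goes — the log format is taken from the output above, while the `console.log` path and the `step_durations` helper are hypothetical:

    import re
    from collections import defaultdict

    # Match the notices emitted around each management step, as seen above:
    #   ... 407:trace_step: *NOTICE*: [FTL][ftl0] name: <step name>
    #   ... 409:trace_step: *NOTICE*: [FTL][ftl0] duration: <ms> ms
    # A step name ends either at the next wall-clock prefix (e.g. 00:30:07.054)
    # or at end of line/file.
    NAME_RE = re.compile(
        r"trace_step: \*NOTICE\*: \[FTL\]\[(\w+)\] name: (.+?)"
        r"(?= \d{2}:\d{2}:\d{2}\.\d{3} |$)", re.M)
    DUR_RE = re.compile(
        r"trace_step: \*NOTICE\*: \[FTL\]\[(\w+)\] duration: ([\d.]+) ms")

    def step_durations(log_text: str) -> dict:
        """Pair each 'name:' notice with the 'duration:' notice that follows it."""
        totals = defaultdict(float)
        pending = {}  # device -> name of the step we are currently inside
        # Interleave name/duration matches in document order by match offset.
        events = sorted(
            [(m.start(), "name", m.group(1), m.group(2)) for m in NAME_RE.finditer(log_text)] +
            [(m.start(), "dur", m.group(1), m.group(2)) for m in DUR_RE.finditer(log_text)])
        for _, kind, dev, value in events:
            if kind == "name":
                pending[dev] = value
            elif dev in pending:  # a duration closes the most recent name
                totals[(dev, pending.pop(dev))] += float(value)
        return dict(totals)

    if __name__ == "__main__":
        with open("console.log") as f:  # hypothetical: this log saved as plain text
            for (dev, step), ms in sorted(step_durations(f.read()).items(),
                                          key=lambda kv: -kv[1]):
                print(f"{dev} {step:40s} {ms:8.3f} ms")

The per-step values printed this way should roughly sum to the totals that `finish_msg` reports, minus any untraced gaps between steps; note that repeated steps (the device is started and shut down twice in this run) are accumulated per step name. The log then continues with the second spdk_tgt launch for the trim test: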
00:30:09.478 [2024-04-17 14:49:17.698998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79497 ] 00:30:09.478 [2024-04-17 14:49:17.865246] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:09.736 [2024-04-17 14:49:18.119319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:10.671 14:49:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:10.671 14:49:19 -- common/autotest_common.sh@850 -- # return 0 00:30:10.671 14:49:19 -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:30:10.928 [2024-04-17 14:49:19.396651] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:10.928 [2024-04-17 14:49:19.396727] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:11.187 [2024-04-17 14:49:19.574971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.187 [2024-04-17 14:49:19.575038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:11.187 [2024-04-17 14:49:19.575061] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:11.187 [2024-04-17 14:49:19.575072] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.187 [2024-04-17 14:49:19.578617] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.187 [2024-04-17 14:49:19.578664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:11.188 [2024-04-17 14:49:19.578681] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.503 ms 00:30:11.188 [2024-04-17 14:49:19.578701] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.188 [2024-04-17 14:49:19.578885] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:11.188 [2024-04-17 14:49:19.580176] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:11.188 [2024-04-17 14:49:19.580218] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.188 [2024-04-17 14:49:19.580234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:11.188 [2024-04-17 14:49:19.580252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.359 ms 00:30:11.188 [2024-04-17 14:49:19.580263] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.188 [2024-04-17 14:49:19.581917] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:11.188 [2024-04-17 14:49:19.604137] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.188 [2024-04-17 14:49:19.604221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:11.188 [2024-04-17 14:49:19.604238] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.224 ms 00:30:11.188 [2024-04-17 14:49:19.604254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.188 [2024-04-17 14:49:19.604408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.188 [2024-04-17 14:49:19.604430] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:11.188 [2024-04-17 14:49:19.604443] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.028 ms 00:30:11.188 [2024-04-17 14:49:19.604457] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.188 [2024-04-17 14:49:19.611838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.188 [2024-04-17 14:49:19.611883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:11.188 [2024-04-17 14:49:19.611896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.310 ms 00:30:11.188 [2024-04-17 14:49:19.611909] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.188 [2024-04-17 14:49:19.612021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.188 [2024-04-17 14:49:19.612043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:11.188 [2024-04-17 14:49:19.612054] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:30:11.188 [2024-04-17 14:49:19.612066] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.188 [2024-04-17 14:49:19.612095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.188 [2024-04-17 14:49:19.612108] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:11.188 [2024-04-17 14:49:19.612119] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:11.188 [2024-04-17 14:49:19.612131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.188 [2024-04-17 14:49:19.612158] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:30:11.188 [2024-04-17 14:49:19.618422] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.188 [2024-04-17 14:49:19.618460] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:11.188 [2024-04-17 14:49:19.618476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.267 ms 00:30:11.188 [2024-04-17 14:49:19.618509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.188 [2024-04-17 14:49:19.618595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.188 [2024-04-17 14:49:19.618609] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:11.188 [2024-04-17 14:49:19.618624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:11.188 [2024-04-17 14:49:19.618636] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.188 [2024-04-17 14:49:19.618667] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:11.188 [2024-04-17 14:49:19.618694] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:30:11.188 [2024-04-17 14:49:19.618734] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:11.188 [2024-04-17 14:49:19.618768] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:30:11.188 [2024-04-17 14:49:19.618852] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:30:11.188 [2024-04-17 14:49:19.618867] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:11.188 [2024-04-17 14:49:19.618884] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout 
blob store 0x140 bytes 00:30:11.188 [2024-04-17 14:49:19.618900] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:11.188 [2024-04-17 14:49:19.618916] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:11.188 [2024-04-17 14:49:19.618929] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:30:11.188 [2024-04-17 14:49:19.618943] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:11.188 [2024-04-17 14:49:19.618954] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:30:11.188 [2024-04-17 14:49:19.618968] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:30:11.188 [2024-04-17 14:49:19.618979] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.188 [2024-04-17 14:49:19.618996] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:11.188 [2024-04-17 14:49:19.619008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:30:11.188 [2024-04-17 14:49:19.619025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.188 [2024-04-17 14:49:19.619094] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.188 [2024-04-17 14:49:19.619109] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:11.188 [2024-04-17 14:49:19.619121] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:30:11.188 [2024-04-17 14:49:19.619135] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.188 [2024-04-17 14:49:19.619218] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:11.188 [2024-04-17 14:49:19.619235] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:11.188 [2024-04-17 14:49:19.619250] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:11.188 [2024-04-17 14:49:19.619265] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:11.188 [2024-04-17 14:49:19.619277] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:11.188 [2024-04-17 14:49:19.619291] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:11.188 [2024-04-17 14:49:19.619301] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:30:11.188 [2024-04-17 14:49:19.619315] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:11.188 [2024-04-17 14:49:19.619326] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:30:11.188 [2024-04-17 14:49:19.619340] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:11.188 [2024-04-17 14:49:19.619350] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:11.188 [2024-04-17 14:49:19.619370] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:30:11.188 [2024-04-17 14:49:19.619380] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:11.188 [2024-04-17 14:49:19.619394] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:11.188 [2024-04-17 14:49:19.619404] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:30:11.188 [2024-04-17 14:49:19.619418] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:11.188 [2024-04-17 14:49:19.619429] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region nvc_md_mirror 00:30:11.188 [2024-04-17 14:49:19.619454] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:30:11.188 [2024-04-17 14:49:19.619463] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:11.188 [2024-04-17 14:49:19.619486] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:30:11.188 [2024-04-17 14:49:19.619497] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:30:11.188 [2024-04-17 14:49:19.619522] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:30:11.188 [2024-04-17 14:49:19.619533] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:11.188 [2024-04-17 14:49:19.619545] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:30:11.188 [2024-04-17 14:49:19.619555] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:30:11.188 [2024-04-17 14:49:19.619567] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:11.188 [2024-04-17 14:49:19.619578] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:30:11.188 [2024-04-17 14:49:19.619594] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:30:11.188 [2024-04-17 14:49:19.619604] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:11.188 [2024-04-17 14:49:19.619616] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:30:11.189 [2024-04-17 14:49:19.619626] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:30:11.189 [2024-04-17 14:49:19.619639] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:11.189 [2024-04-17 14:49:19.619649] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:30:11.189 [2024-04-17 14:49:19.619661] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:30:11.189 [2024-04-17 14:49:19.619671] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:11.189 [2024-04-17 14:49:19.619683] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:30:11.189 [2024-04-17 14:49:19.619693] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:11.189 [2024-04-17 14:49:19.619705] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:11.189 [2024-04-17 14:49:19.619715] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:30:11.189 [2024-04-17 14:49:19.619734] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:11.189 [2024-04-17 14:49:19.619744] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:11.189 [2024-04-17 14:49:19.619758] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:11.189 [2024-04-17 14:49:19.619768] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:11.189 [2024-04-17 14:49:19.619784] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:11.189 [2024-04-17 14:49:19.619795] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:11.189 [2024-04-17 14:49:19.619807] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:11.189 [2024-04-17 14:49:19.619817] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:11.189 [2024-04-17 14:49:19.619830] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:11.189 [2024-04-17 14:49:19.619840] 
ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:11.189 [2024-04-17 14:49:19.619852] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:11.189 [2024-04-17 14:49:19.619863] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:11.189 [2024-04-17 14:49:19.619879] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:11.189 [2024-04-17 14:49:19.619892] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:30:11.189 [2024-04-17 14:49:19.619905] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:30:11.189 [2024-04-17 14:49:19.619916] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:30:11.189 [2024-04-17 14:49:19.619931] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:30:11.189 [2024-04-17 14:49:19.619943] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:30:11.189 [2024-04-17 14:49:19.619956] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:30:11.189 [2024-04-17 14:49:19.619967] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:30:11.189 [2024-04-17 14:49:19.619983] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:30:11.189 [2024-04-17 14:49:19.619994] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:30:11.189 [2024-04-17 14:49:19.620008] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:30:11.189 [2024-04-17 14:49:19.620019] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:30:11.189 [2024-04-17 14:49:19.620033] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:30:11.189 [2024-04-17 14:49:19.620044] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:30:11.189 [2024-04-17 14:49:19.620058] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:11.189 [2024-04-17 14:49:19.620069] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:11.189 [2024-04-17 14:49:19.620086] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:11.189 [2024-04-17 14:49:19.620098] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:11.189 [2024-04-17 14:49:19.620111] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:30:11.189 [2024-04-17 14:49:19.620122] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
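Each Region line in the superblock dump above gives the region version, start offset, and size in FTL blocks, in hex. Assuming the FTL's 4096-byte block size (an assumption; the unit itself is not printed on these lines), the sizes line up with the MiB figures in the layout dumps: blk_sz:0x1900000 on the type:0x9 region is 26214400 blocks, the same 102400.00 MiB printed for the base-device data region, and blk_sz:0x100000 on type:0x8 is 1048576 blocks, or 4096 MiB. A minimal shell check of the conversion:

    # Convert a Region blk_sz (hex, counted in FTL blocks) to MiB,
    # assuming 4096-byte FTL blocks.
    blk_sz=0x1900000
    echo "$(( blk_sz * 4096 / 1024 / 1024 )) MiB"   # prints: 102400 MiB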
00:30:11.189 [2024-04-17 14:49:19.620136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:11.189 [2024-04-17 14:49:19.620150] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:30:11.189 [2024-04-17 14:49:19.620164] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.958 ms
00:30:11.189 [2024-04-17 14:49:19.620174] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:11.189 [2024-04-17 14:49:19.646689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:11.189 [2024-04-17 14:49:19.646739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:30:11.189 [2024-04-17 14:49:19.646758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.456 ms
00:30:11.189 [2024-04-17 14:49:19.646771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:11.189 [2024-04-17 14:49:19.646929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:11.189 [2024-04-17 14:49:19.646943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:30:11.189 [2024-04-17 14:49:19.646958] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms
00:30:11.189 [2024-04-17 14:49:19.646969] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:11.189 [2024-04-17 14:49:19.706688] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:11.189 [2024-04-17 14:49:19.706752] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:30:11.189 [2024-04-17 14:49:19.706778] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.690 ms
00:30:11.189 [2024-04-17 14:49:19.706790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:11.189 [2024-04-17 14:49:19.706900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:11.189 [2024-04-17 14:49:19.706914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:30:11.189 [2024-04-17 14:49:19.706929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:30:11.189 [2024-04-17 14:49:19.706940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:11.189 [2024-04-17 14:49:19.707400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:11.189 [2024-04-17 14:49:19.707422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:30:11.189 [2024-04-17 14:49:19.707438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.428 ms
00:30:11.189 [2024-04-17 14:49:19.707452] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:11.189 [2024-04-17 14:49:19.707593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:11.189 [2024-04-17 14:49:19.707609] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:30:11.189 [2024-04-17 14:49:19.707623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms
00:30:11.189 [2024-04-17 14:49:19.707635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:11.189 [2024-04-17 14:49:19.734284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:11.189 [2024-04-17 14:49:19.734339] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:30:11.189 [2024-04-17 14:49:19.734367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.619 ms
00:30:11.189 [2024-04-17 14:49:19.734378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:11.189 [2024-04-17 14:49:19.755148] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:30:11.189 [2024-04-17 14:49:19.755206] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:30:11.189 [2024-04-17 14:49:19.755243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:11.189 [2024-04-17 14:49:19.755255] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:30:11.189 [2024-04-17 14:49:19.755279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.666 ms
00:30:11.189 [2024-04-17 14:49:19.755290] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:11.449 [2024-04-17 14:49:19.789284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:11.449 [2024-04-17 14:49:19.789371] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:30:11.449 [2024-04-17 14:49:19.789402] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.866 ms
00:30:11.449 [2024-04-17 14:49:19.789421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:11.449 [2024-04-17 14:49:19.811133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:11.449 [2024-04-17 14:49:19.811207] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:30:11.449 [2024-04-17 14:49:19.811227] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.561 ms
00:30:11.449 [2024-04-17 14:49:19.811239] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:11.449 [2024-04-17 14:49:19.832455] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:11.449 [2024-04-17 14:49:19.832521] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:30:11.449 [2024-04-17 14:49:19.832540] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.110 ms
00:30:11.449 [2024-04-17 14:49:19.832550] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:11.449 [2024-04-17 14:49:19.833102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:11.449 [2024-04-17 14:49:19.833137] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:30:11.449 [2024-04-17 14:49:19.833157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms
00:30:11.449 [2024-04-17 14:49:19.833168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:11.449 [2024-04-17 14:49:19.937438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:11.449 [2024-04-17 14:49:19.937513] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:30:11.449 [2024-04-17 14:49:19.937533] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.231 ms
00:30:11.449 [2024-04-17 14:49:19.937544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:11.449 [2024-04-17 14:49:19.953333] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:30:11.449 [2024-04-17 14:49:19.970787]
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.449 [2024-04-17 14:49:19.970870] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:11.449 [2024-04-17 14:49:19.970888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.102 ms 00:30:11.449 [2024-04-17 14:49:19.970906] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.449 [2024-04-17 14:49:19.971024] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.449 [2024-04-17 14:49:19.971042] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:11.449 [2024-04-17 14:49:19.971055] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:11.449 [2024-04-17 14:49:19.971073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.449 [2024-04-17 14:49:19.971130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.449 [2024-04-17 14:49:19.971152] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:11.449 [2024-04-17 14:49:19.971165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:30:11.449 [2024-04-17 14:49:19.971179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.449 [2024-04-17 14:49:19.973435] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.449 [2024-04-17 14:49:19.973467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:30:11.449 [2024-04-17 14:49:19.973478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.231 ms 00:30:11.449 [2024-04-17 14:49:19.973498] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.449 [2024-04-17 14:49:19.973546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.449 [2024-04-17 14:49:19.973561] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:11.449 [2024-04-17 14:49:19.973572] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:11.449 [2024-04-17 14:49:19.973585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.449 [2024-04-17 14:49:19.973620] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:11.449 [2024-04-17 14:49:19.973637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.449 [2024-04-17 14:49:19.973651] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:11.449 [2024-04-17 14:49:19.973664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:30:11.449 [2024-04-17 14:49:19.973674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.449 [2024-04-17 14:49:20.016162] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.449 [2024-04-17 14:49:20.016242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:11.449 [2024-04-17 14:49:20.016262] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.439 ms 00:30:11.449 [2024-04-17 14:49:20.016289] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.449 [2024-04-17 14:49:20.016438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.449 [2024-04-17 14:49:20.016453] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:11.449 [2024-04-17 14:49:20.016468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.036 ms
00:30:11.449 [2024-04-17 14:49:20.016479] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:11.449 [2024-04-17 14:49:20.017644] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:30:11.449 [2024-04-17 14:49:20.024007] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 442.300 ms, result 0
00:30:11.449 [2024-04-17 14:49:20.025029] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:30:11.707 Some configs were skipped because the RPC state that can call them passed over.
00:30:11.707 14:49:20 -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:30:11.966 [2024-04-17 14:49:20.367016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:11.966 [2024-04-17 14:49:20.367087] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap
00:30:11.966 [2024-04-17 14:49:20.367105] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.674 ms
00:30:11.966 [2024-04-17 14:49:20.367120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:11.966 [2024-04-17 14:49:20.367165] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 43.829 ms, result 0
00:30:11.966 true
00:30:11.966 14:49:20 -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:30:12.224 [2024-04-17 14:49:20.612599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:12.224 [2024-04-17 14:49:20.612656] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap
00:30:12.224 [2024-04-17 14:49:20.612675] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.188 ms
00:30:12.224 [2024-04-17 14:49:20.612685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:12.224 [2024-04-17 14:49:20.612725] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 41.325 ms, result 0
00:30:12.224 true
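The two rpc.py calls above trim 1024 blocks at each end of the device's logical space: the second starts at LBA 23591936, and 23591936 + 1024 = 23592960, the L2P entry count reported at startup, so it covers exactly the last 1024 blocks. A minimal sketch of the same sequence, assuming a running SPDK application that already exposes the FTL bdev ftl0 (paths and flags are taken verbatim from the commands above):

    # Trim the first and the last 1024 blocks of ftl0.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
    "$RPC" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

Each call prints true once the corresponding 'FTL unmap' management process finishes with result 0, as the trace above shows.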
00:30:12.224 14:49:20 -- ftl/trim.sh@102 -- # killprocess 79497
00:30:12.224 14:49:20 -- common/autotest_common.sh@936 -- # '[' -z 79497 ']'
00:30:12.224 14:49:20 -- common/autotest_common.sh@940 -- # kill -0 79497
00:30:12.224 14:49:20 -- common/autotest_common.sh@941 -- # uname
00:30:12.224 14:49:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:30:12.224 14:49:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79497
00:30:12.224 killing process with pid 79497
00:30:12.225 14:49:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:30:12.225 14:49:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:30:12.225 14:49:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79497'
00:30:12.225 14:49:20 -- common/autotest_common.sh@955 -- # kill 79497
00:30:12.225 14:49:20 -- common/autotest_common.sh@960 -- # wait 79497
00:30:13.602 [2024-04-17 14:49:21.876972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:13.602 [2024-04-17 14:49:21.877034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:30:13.602 [2024-04-17 14:49:21.877067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:30:13.602 [2024-04-17 14:49:21.877082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:13.602 [2024-04-17 14:49:21.877106] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:30:13.602 [2024-04-17 14:49:21.881212] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:13.602 [2024-04-17 14:49:21.881246] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:30:13.602 [2024-04-17 14:49:21.881260] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.084 ms
00:30:13.602 [2024-04-17 14:49:21.881273] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:13.602 [2024-04-17 14:49:21.881524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:13.602 [2024-04-17 14:49:21.881540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:30:13.602 [2024-04-17 14:49:21.881572] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms
00:30:13.602 [2024-04-17 14:49:21.881595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:13.602 [2024-04-17 14:49:21.885485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:13.602 [2024-04-17 14:49:21.885551] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:30:13.602 [2024-04-17 14:49:21.885568] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.864 ms
00:30:13.602 [2024-04-17 14:49:21.885579] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:13.602 [2024-04-17 14:49:21.891696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:13.602 [2024-04-17 14:49:21.891743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps
00:30:13.602 [2024-04-17 14:49:21.891761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.072 ms
00:30:13.602 [2024-04-17 14:49:21.891771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:13.602 [2024-04-17 14:49:21.908350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:13.602 [2024-04-17 14:49:21.908388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:30:13.602 [2024-04-17 14:49:21.908404] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.524 ms
00:30:13.602 [2024-04-17 14:49:21.908414] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:13.602 [2024-04-17 14:49:21.920151] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:13.602 [2024-04-17 14:49:21.920211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:30:13.602 [2024-04-17 14:49:21.920251] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.672 ms
00:30:13.602 [2024-04-17 14:49:21.920262] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:13.602 [2024-04-17 14:49:21.920436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:13.602 [2024-04-17 14:49:21.920451] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:30:13.602 [2024-04-17 14:49:21.920469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms
00:30:13.602 [2024-04-17 14:49:21.920480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:13.602 [2024-04-17 14:49:21.938788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:13.602 [2024-04-17 14:49:21.938857] mngt/ftl_mngt.c:
407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:30:13.602 [2024-04-17 14:49:21.938877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.259 ms 00:30:13.602 [2024-04-17 14:49:21.938888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.602 [2024-04-17 14:49:21.955881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.602 [2024-04-17 14:49:21.955938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:30:13.602 [2024-04-17 14:49:21.955955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.913 ms 00:30:13.602 [2024-04-17 14:49:21.955965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.602 [2024-04-17 14:49:21.971975] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.602 [2024-04-17 14:49:21.972016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:13.602 [2024-04-17 14:49:21.972036] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.950 ms 00:30:13.602 [2024-04-17 14:49:21.972045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.602 [2024-04-17 14:49:21.987874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.602 [2024-04-17 14:49:21.987912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:13.602 [2024-04-17 14:49:21.987928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.756 ms 00:30:13.602 [2024-04-17 14:49:21.987938] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.602 [2024-04-17 14:49:21.987976] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:13.602 [2024-04-17 14:49:21.987994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:13.602 [2024-04-17 14:49:21.988010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:13.602 [2024-04-17 14:49:21.988022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:13.602 [2024-04-17 14:49:21.988035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:13.602 [2024-04-17 14:49:21.988045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:13.602 [2024-04-17 14:49:21.988058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:13.602 [2024-04-17 14:49:21.988069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:13.602 [2024-04-17 14:49:21.988082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:13.602 [2024-04-17 14:49:21.988092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:13.602 [2024-04-17 14:49:21.988109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:13.602 [2024-04-17 14:49:21.988119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:13.602 [2024-04-17 14:49:21.988132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:13.602 [2024-04-17 14:49:21.988143] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:13.602 [2024-04-17 14:49:21.988158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:13.602 [2024-04-17 14:49:21.988169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:13.602 [2024-04-17 14:49:21.988182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:13.602 [2024-04-17 14:49:21.988192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:13.602 [2024-04-17 14:49:21.988205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:13.602 [2024-04-17 14:49:21.988215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:13.602 [2024-04-17 14:49:21.988228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:13.602 [2024-04-17 14:49:21.988238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:13.602 [2024-04-17 14:49:21.988251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:13.602 [2024-04-17 14:49:21.988261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:13.602 [2024-04-17 14:49:21.988274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:13.602 [2024-04-17 14:49:21.988284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:13.602 [2024-04-17 14:49:21.988303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:13.602 [2024-04-17 14:49:21.988313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988447] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 
14:49:21.988756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.988995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.989007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.989018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.989030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.989041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:30:13.603 [2024-04-17 14:49:21.989054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.989064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.989079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.989104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.989118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.989129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.989142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.989153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:13.603 [2024-04-17 14:49:21.989166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:13.604 [2024-04-17 14:49:21.989176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:13.604 [2024-04-17 14:49:21.989189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:13.604 [2024-04-17 14:49:21.989200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:13.604 [2024-04-17 14:49:21.989213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:13.604 [2024-04-17 14:49:21.989230] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:13.604 [2024-04-17 14:49:21.989242] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5b2ea0b9-3b71-4165-8192-0f28abbb9f7b 00:30:13.604 [2024-04-17 14:49:21.989253] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:13.604 [2024-04-17 14:49:21.989265] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:13.604 [2024-04-17 14:49:21.989274] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:13.604 [2024-04-17 14:49:21.989290] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:13.604 [2024-04-17 14:49:21.989299] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:13.604 [2024-04-17 14:49:21.989315] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:13.604 [2024-04-17 14:49:21.989325] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:13.604 [2024-04-17 14:49:21.989336] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:13.604 [2024-04-17 14:49:21.989345] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:13.604 [2024-04-17 14:49:21.989357] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.604 [2024-04-17 14:49:21.989367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:13.604 [2024-04-17 14:49:21.989380] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.383 ms 00:30:13.604 [2024-04-17 14:49:21.989390] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:30:13.604 [2024-04-17 14:49:22.010132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.604 [2024-04-17 14:49:22.010170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:13.604 [2024-04-17 14:49:22.010186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.704 ms 00:30:13.604 [2024-04-17 14:49:22.010199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.604 [2024-04-17 14:49:22.010520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:13.604 [2024-04-17 14:49:22.010550] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:13.604 [2024-04-17 14:49:22.010568] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:30:13.604 [2024-04-17 14:49:22.010579] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.604 [2024-04-17 14:49:22.081227] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:13.604 [2024-04-17 14:49:22.081274] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:13.604 [2024-04-17 14:49:22.081294] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:13.604 [2024-04-17 14:49:22.081304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.604 [2024-04-17 14:49:22.081407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:13.604 [2024-04-17 14:49:22.081419] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:13.604 [2024-04-17 14:49:22.081432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:13.604 [2024-04-17 14:49:22.081442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.604 [2024-04-17 14:49:22.081505] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:13.604 [2024-04-17 14:49:22.081518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:13.604 [2024-04-17 14:49:22.081531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:13.604 [2024-04-17 14:49:22.081544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.604 [2024-04-17 14:49:22.081567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:13.604 [2024-04-17 14:49:22.081577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:13.604 [2024-04-17 14:49:22.081592] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:13.604 [2024-04-17 14:49:22.081602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.863 [2024-04-17 14:49:22.214210] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:13.863 [2024-04-17 14:49:22.214272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:13.863 [2024-04-17 14:49:22.214289] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:13.864 [2024-04-17 14:49:22.214304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.864 [2024-04-17 14:49:22.264151] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:13.864 [2024-04-17 14:49:22.264210] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:13.864 [2024-04-17 14:49:22.264227] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:13.864 
[2024-04-17 14:49:22.264238] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.864 [2024-04-17 14:49:22.264327] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:13.864 [2024-04-17 14:49:22.264340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:13.864 [2024-04-17 14:49:22.264353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:13.864 [2024-04-17 14:49:22.264363] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.864 [2024-04-17 14:49:22.264398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:13.864 [2024-04-17 14:49:22.264409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:13.864 [2024-04-17 14:49:22.264424] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:13.864 [2024-04-17 14:49:22.264434] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.864 [2024-04-17 14:49:22.264559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:13.864 [2024-04-17 14:49:22.264589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:13.864 [2024-04-17 14:49:22.264603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:13.864 [2024-04-17 14:49:22.264614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.864 [2024-04-17 14:49:22.264658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:13.864 [2024-04-17 14:49:22.264678] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:13.864 [2024-04-17 14:49:22.264692] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:13.864 [2024-04-17 14:49:22.264703] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.864 [2024-04-17 14:49:22.264744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:13.864 [2024-04-17 14:49:22.264756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:13.864 [2024-04-17 14:49:22.264769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:13.864 [2024-04-17 14:49:22.264780] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.864 [2024-04-17 14:49:22.264831] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:13.864 [2024-04-17 14:49:22.264843] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:13.864 [2024-04-17 14:49:22.264859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:13.864 [2024-04-17 14:49:22.264870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:13.864 [2024-04-17 14:49:22.265010] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 388.013 ms, result 0 00:30:15.239 14:49:23 -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:15.239 [2024-04-17 14:49:23.782454] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
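spdk_dd is a standalone SPDK application, which is why the command above is immediately followed by a fresh SPDK banner, DPDK EAL initialization, and a second complete 'FTL startup' sequence: the --json file carries the bdev configuration that recreates the cache and FTL bdevs before any data is copied. A sketch of the same read-back, assuming the repository paths shown in the command and a 4096-byte FTL block size (the block size is an assumption, not printed here):

    SPDK=/home/vagrant/spdk_repo/spdk
    # Read 65536 blocks from the ftl0 bdev into a plain file;
    # at an assumed 4096 bytes per block that is 65536 * 4096 bytes = 256 MiB.
    "$SPDK/build/bin/spdk_dd" --ib=ftl0 \
        --of="$SPDK/test/ftl/data" \
        --count=65536 \
        --json="$SPDK/test/ftl/config/ftl.json"

Here --ib selects an SPDK bdev as the input side and --of a regular file as the output; the mirror-image flags --if and --ob copy a file back into a bdev.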
00:30:15.239 [2024-04-17 14:49:23.783457] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79572 ] 00:30:15.499 [2024-04-17 14:49:23.956655] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:15.758 [2024-04-17 14:49:24.211054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:16.326 [2024-04-17 14:49:24.655158] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:16.326 [2024-04-17 14:49:24.655234] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:16.326 [2024-04-17 14:49:24.817854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.326 [2024-04-17 14:49:24.817911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:16.326 [2024-04-17 14:49:24.817933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:16.326 [2024-04-17 14:49:24.817945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.326 [2024-04-17 14:49:24.821629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.326 [2024-04-17 14:49:24.821673] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:16.326 [2024-04-17 14:49:24.821687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.656 ms 00:30:16.326 [2024-04-17 14:49:24.821702] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.326 [2024-04-17 14:49:24.821838] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:16.326 [2024-04-17 14:49:24.823141] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:16.326 [2024-04-17 14:49:24.823174] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.326 [2024-04-17 14:49:24.823191] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:16.326 [2024-04-17 14:49:24.823204] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.355 ms 00:30:16.326 [2024-04-17 14:49:24.823216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.326 [2024-04-17 14:49:24.824960] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:16.326 [2024-04-17 14:49:24.850543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.326 [2024-04-17 14:49:24.850602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:16.326 [2024-04-17 14:49:24.850621] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.582 ms 00:30:16.326 [2024-04-17 14:49:24.850634] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.326 [2024-04-17 14:49:24.850792] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.326 [2024-04-17 14:49:24.850809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:16.326 [2024-04-17 14:49:24.850827] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:30:16.326 [2024-04-17 14:49:24.850840] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.326 [2024-04-17 14:49:24.858598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.326 [2024-04-17 
14:49:24.858643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:16.326 [2024-04-17 14:49:24.858659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.697 ms 00:30:16.326 [2024-04-17 14:49:24.858671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.326 [2024-04-17 14:49:24.858830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.326 [2024-04-17 14:49:24.858850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:16.326 [2024-04-17 14:49:24.858866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:30:16.326 [2024-04-17 14:49:24.858878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.326 [2024-04-17 14:49:24.858913] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.326 [2024-04-17 14:49:24.858925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:16.326 [2024-04-17 14:49:24.858938] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:16.326 [2024-04-17 14:49:24.858949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.326 [2024-04-17 14:49:24.858980] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:30:16.326 [2024-04-17 14:49:24.865218] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.326 [2024-04-17 14:49:24.865264] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:16.326 [2024-04-17 14:49:24.865279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.246 ms 00:30:16.326 [2024-04-17 14:49:24.865292] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.326 [2024-04-17 14:49:24.865393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.326 [2024-04-17 14:49:24.865406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:16.326 [2024-04-17 14:49:24.865419] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:16.326 [2024-04-17 14:49:24.865430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.326 [2024-04-17 14:49:24.865458] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:16.326 [2024-04-17 14:49:24.865485] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:30:16.326 [2024-04-17 14:49:24.865543] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:16.326 [2024-04-17 14:49:24.865566] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:30:16.326 [2024-04-17 14:49:24.865648] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:30:16.326 [2024-04-17 14:49:24.865664] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:16.326 [2024-04-17 14:49:24.865678] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:30:16.326 [2024-04-17 14:49:24.865693] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:16.326 [2024-04-17 14:49:24.865707] ftl_layout.c: 675:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:16.326 [2024-04-17 14:49:24.865721] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:30:16.326 [2024-04-17 14:49:24.865732] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:16.326 [2024-04-17 14:49:24.865743] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:30:16.326 [2024-04-17 14:49:24.865754] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:30:16.326 [2024-04-17 14:49:24.865766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.326 [2024-04-17 14:49:24.865781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:16.327 [2024-04-17 14:49:24.865798] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:30:16.327 [2024-04-17 14:49:24.865809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.327 [2024-04-17 14:49:24.865880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.327 [2024-04-17 14:49:24.865893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:16.327 [2024-04-17 14:49:24.865905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:30:16.327 [2024-04-17 14:49:24.865915] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.327 [2024-04-17 14:49:24.865997] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:16.327 [2024-04-17 14:49:24.866011] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:16.327 [2024-04-17 14:49:24.866027] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:16.327 [2024-04-17 14:49:24.866039] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:16.327 [2024-04-17 14:49:24.866050] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:16.327 [2024-04-17 14:49:24.866061] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:16.327 [2024-04-17 14:49:24.866072] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:30:16.327 [2024-04-17 14:49:24.866084] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:16.327 [2024-04-17 14:49:24.866096] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:30:16.327 [2024-04-17 14:49:24.866106] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:16.327 [2024-04-17 14:49:24.866130] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:16.327 [2024-04-17 14:49:24.866141] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:30:16.327 [2024-04-17 14:49:24.866152] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:16.327 [2024-04-17 14:49:24.866163] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:16.327 [2024-04-17 14:49:24.866174] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:30:16.327 [2024-04-17 14:49:24.866184] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:16.327 [2024-04-17 14:49:24.866195] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:16.327 [2024-04-17 14:49:24.866205] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:30:16.327 [2024-04-17 14:49:24.866216] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:30:16.327 [2024-04-17 14:49:24.866227] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:30:16.327 [2024-04-17 14:49:24.866238] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:30:16.327 [2024-04-17 14:49:24.866248] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:30:16.327 [2024-04-17 14:49:24.866260] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:16.327 [2024-04-17 14:49:24.866270] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:30:16.327 [2024-04-17 14:49:24.866282] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:30:16.327 [2024-04-17 14:49:24.866292] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:16.327 [2024-04-17 14:49:24.866303] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:30:16.327 [2024-04-17 14:49:24.866313] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:30:16.327 [2024-04-17 14:49:24.866324] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:16.327 [2024-04-17 14:49:24.866335] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:30:16.327 [2024-04-17 14:49:24.866345] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:30:16.327 [2024-04-17 14:49:24.866368] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:16.327 [2024-04-17 14:49:24.866380] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:30:16.327 [2024-04-17 14:49:24.866391] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:30:16.327 [2024-04-17 14:49:24.866401] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:16.327 [2024-04-17 14:49:24.866411] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:30:16.327 [2024-04-17 14:49:24.866422] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:16.327 [2024-04-17 14:49:24.866432] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:16.327 [2024-04-17 14:49:24.866443] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:30:16.327 [2024-04-17 14:49:24.866453] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:16.327 [2024-04-17 14:49:24.866464] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:16.327 [2024-04-17 14:49:24.866475] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:16.327 [2024-04-17 14:49:24.866486] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:16.327 [2024-04-17 14:49:24.866511] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:16.327 [2024-04-17 14:49:24.866523] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:16.327 [2024-04-17 14:49:24.866534] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:16.327 [2024-04-17 14:49:24.866545] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:16.327 [2024-04-17 14:49:24.866556] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:16.327 [2024-04-17 14:49:24.866566] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:16.327 [2024-04-17 14:49:24.866577] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:16.327 [2024-04-17 14:49:24.866589] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:16.327 [2024-04-17 14:49:24.866603] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:16.327 [2024-04-17 14:49:24.866616] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:30:16.327 [2024-04-17 14:49:24.866628] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:30:16.327 [2024-04-17 14:49:24.866641] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:30:16.327 [2024-04-17 14:49:24.866654] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:30:16.327 [2024-04-17 14:49:24.866666] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:30:16.327 [2024-04-17 14:49:24.866678] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:30:16.327 [2024-04-17 14:49:24.866690] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:30:16.327 [2024-04-17 14:49:24.866703] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:30:16.327 [2024-04-17 14:49:24.866715] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:30:16.327 [2024-04-17 14:49:24.866727] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:30:16.327 [2024-04-17 14:49:24.866739] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:30:16.327 [2024-04-17 14:49:24.866751] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:30:16.327 [2024-04-17 14:49:24.866763] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:30:16.327 [2024-04-17 14:49:24.866775] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:16.328 [2024-04-17 14:49:24.866788] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:16.328 [2024-04-17 14:49:24.866801] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:16.328 [2024-04-17 14:49:24.866813] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:16.328 [2024-04-17 14:49:24.866825] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:16.328 [2024-04-17 14:49:24.866838] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:16.328 [2024-04-17 14:49:24.866850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.328 [2024-04-17 14:49:24.866866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:16.328 [2024-04-17 14:49:24.866877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.896 ms 00:30:16.328 [2024-04-17 14:49:24.866890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.328 [2024-04-17 14:49:24.895167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.328 [2024-04-17 14:49:24.895218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:16.328 [2024-04-17 14:49:24.895235] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.218 ms 00:30:16.328 [2024-04-17 14:49:24.895246] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.328 [2024-04-17 14:49:24.895402] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.328 [2024-04-17 14:49:24.895415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:16.328 [2024-04-17 14:49:24.895426] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:30:16.328 [2024-04-17 14:49:24.895437] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.587 [2024-04-17 14:49:24.964084] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.587 [2024-04-17 14:49:24.964133] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:16.587 [2024-04-17 14:49:24.964148] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.620 ms 00:30:16.587 [2024-04-17 14:49:24.964159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.587 [2024-04-17 14:49:24.964264] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.587 [2024-04-17 14:49:24.964277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:16.587 [2024-04-17 14:49:24.964288] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:16.587 [2024-04-17 14:49:24.964298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.587 [2024-04-17 14:49:24.964777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.587 [2024-04-17 14:49:24.964790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:16.587 [2024-04-17 14:49:24.964802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.451 ms 00:30:16.587 [2024-04-17 14:49:24.964811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.587 [2024-04-17 14:49:24.964928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.587 [2024-04-17 14:49:24.964941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:16.587 [2024-04-17 14:49:24.964952] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:30:16.587 [2024-04-17 14:49:24.964961] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.588 [2024-04-17 14:49:24.989827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.588 [2024-04-17 14:49:24.989873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:16.588 [2024-04-17 14:49:24.989890] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.840 ms 00:30:16.588 
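Every management step in this trace is a fixed record from mngt/ftl_mngt.c: an Action (or Rollback) header, a name, a duration, and a status. A rough sketch for tabulating step durations from a saved copy of this output (ftl0.log is a hypothetical capture; this assumes one trace_step field per line once the log is unwrapped):

  grep -E '40[79]:trace_step' ftl0.log \
    | sed -e 's/.*name: /name=/' -e 's/.*duration: /duration=/' \
    | paste -d' ' - -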
[2024-04-17 14:49:24.989901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.588 [2024-04-17 14:49:25.013274] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:16.588 [2024-04-17 14:49:25.013336] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:16.588 [2024-04-17 14:49:25.013370] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.588 [2024-04-17 14:49:25.013382] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:16.588 [2024-04-17 14:49:25.013397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.311 ms 00:30:16.588 [2024-04-17 14:49:25.013408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.588 [2024-04-17 14:49:25.047242] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.588 [2024-04-17 14:49:25.047312] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:16.588 [2024-04-17 14:49:25.047340] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.702 ms 00:30:16.588 [2024-04-17 14:49:25.047352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.588 [2024-04-17 14:49:25.067940] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.588 [2024-04-17 14:49:25.068002] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:16.588 [2024-04-17 14:49:25.068017] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.413 ms 00:30:16.588 [2024-04-17 14:49:25.068027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.588 [2024-04-17 14:49:25.087529] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.588 [2024-04-17 14:49:25.087576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:16.588 [2024-04-17 14:49:25.087608] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.370 ms 00:30:16.588 [2024-04-17 14:49:25.087618] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.588 [2024-04-17 14:49:25.088119] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.588 [2024-04-17 14:49:25.088134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:16.588 [2024-04-17 14:49:25.088146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:30:16.588 [2024-04-17 14:49:25.088156] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.847 [2024-04-17 14:49:25.191678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.847 [2024-04-17 14:49:25.191737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:16.847 [2024-04-17 14:49:25.191755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.492 ms 00:30:16.847 [2024-04-17 14:49:25.191767] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.847 [2024-04-17 14:49:25.207235] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:30:16.847 [2024-04-17 14:49:25.225969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.847 [2024-04-17 14:49:25.226029] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:16.847 [2024-04-17 14:49:25.226047] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.056 ms 00:30:16.847 [2024-04-17 14:49:25.226062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.847 [2024-04-17 14:49:25.226192] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.847 [2024-04-17 14:49:25.226211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:16.847 [2024-04-17 14:49:25.226226] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:16.847 [2024-04-17 14:49:25.226240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.847 [2024-04-17 14:49:25.226302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.847 [2024-04-17 14:49:25.226322] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:16.847 [2024-04-17 14:49:25.226346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:30:16.847 [2024-04-17 14:49:25.226373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.847 [2024-04-17 14:49:25.228635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.847 [2024-04-17 14:49:25.228666] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:30:16.847 [2024-04-17 14:49:25.228679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.232 ms 00:30:16.847 [2024-04-17 14:49:25.228690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.847 [2024-04-17 14:49:25.228724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.847 [2024-04-17 14:49:25.228736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:16.847 [2024-04-17 14:49:25.228751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:16.847 [2024-04-17 14:49:25.228762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.847 [2024-04-17 14:49:25.228797] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:16.847 [2024-04-17 14:49:25.228809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.847 [2024-04-17 14:49:25.228820] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:16.847 [2024-04-17 14:49:25.228831] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:16.847 [2024-04-17 14:49:25.228842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.847 [2024-04-17 14:49:25.272340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.847 [2024-04-17 14:49:25.272419] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:16.847 [2024-04-17 14:49:25.272436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.469 ms 00:30:16.847 [2024-04-17 14:49:25.272449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.847 [2024-04-17 14:49:25.272644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.847 [2024-04-17 14:49:25.272659] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:16.847 [2024-04-17 14:49:25.272671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:30:16.847 [2024-04-17 14:49:25.272682] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.847 [2024-04-17 14:49:25.273727] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:16.847 [2024-04-17 14:49:25.280395] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 455.541 ms, result 0 00:30:16.847 [2024-04-17 14:49:25.281351] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:16.847 [2024-04-17 14:49:25.302616] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:25.042  Copying: 34/256 [MB] (34 MBps) Copying: 65/256 [MB] (31 MBps) Copying: 97/256 [MB] (31 MBps) Copying: 129/256 [MB] (32 MBps) Copying: 162/256 [MB] (32 MBps) Copying: 195/256 [MB] (33 MBps) Copying: 229/256 [MB] (33 MBps) Copying: 256/256 [MB] (average 32 MBps)[2024-04-17 14:49:33.493910] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:25.042 [2024-04-17 14:49:33.519562] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.042 [2024-04-17 14:49:33.519650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:25.042 [2024-04-17 14:49:33.519682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:25.042 [2024-04-17 14:49:33.519705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.042 [2024-04-17 14:49:33.519760] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:30:25.042 [2024-04-17 14:49:33.525687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.042 [2024-04-17 14:49:33.525767] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:25.042 [2024-04-17 14:49:33.525797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.891 ms 00:30:25.042 [2024-04-17 14:49:33.525819] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.042 [2024-04-17 14:49:33.526303] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.042 [2024-04-17 14:49:33.526352] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:25.042 [2024-04-17 14:49:33.526395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:30:25.042 [2024-04-17 14:49:33.526418] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.042 [2024-04-17 14:49:33.531560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.042 [2024-04-17 14:49:33.531617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:25.042 [2024-04-17 14:49:33.531642] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.095 ms 00:30:25.042 [2024-04-17 14:49:33.531673] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.042 [2024-04-17 14:49:33.542395] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.042 [2024-04-17 14:49:33.542514] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:30:25.042 [2024-04-17 14:49:33.542548] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.615 ms 00:30:25.042 [2024-04-17 14:49:33.542573] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.042 [2024-04-17 14:49:33.601108] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.042 [2024-04-17 14:49:33.601189] mngt/ftl_mngt.c: 407:trace_step: 
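The copy phase above moved 256 MiB at an average of 32 MiB/s, which squares with the roughly eight seconds between its surrounding timestamps (14:49:25 to 14:49:33):

  echo "$(( 256 / 32 ))s"   # expected copy time -> 8s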
*NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:25.042 [2024-04-17 14:49:33.601212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.394 ms 00:30:25.042 [2024-04-17 14:49:33.601228] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.042 [2024-04-17 14:49:33.636617] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.042 [2024-04-17 14:49:33.636703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:25.042 [2024-04-17 14:49:33.636727] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.245 ms 00:30:25.042 [2024-04-17 14:49:33.636744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.042 [2024-04-17 14:49:33.637020] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.042 [2024-04-17 14:49:33.637059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:25.042 [2024-04-17 14:49:33.637076] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:30:25.042 [2024-04-17 14:49:33.637091] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.302 [2024-04-17 14:49:33.700354] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.302 [2024-04-17 14:49:33.700450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:30:25.302 [2024-04-17 14:49:33.700473] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.229 ms 00:30:25.302 [2024-04-17 14:49:33.700501] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.302 [2024-04-17 14:49:33.764957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.302 [2024-04-17 14:49:33.765038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:30:25.302 [2024-04-17 14:49:33.765062] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.289 ms 00:30:25.302 [2024-04-17 14:49:33.765078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.302 [2024-04-17 14:49:33.827668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.302 [2024-04-17 14:49:33.827775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:25.302 [2024-04-17 14:49:33.827799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.433 ms 00:30:25.302 [2024-04-17 14:49:33.827815] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.302 [2024-04-17 14:49:33.875342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.302 [2024-04-17 14:49:33.875425] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:25.302 [2024-04-17 14:49:33.875454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.311 ms 00:30:25.302 [2024-04-17 14:49:33.875466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.302 [2024-04-17 14:49:33.875596] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:25.302 [2024-04-17 14:49:33.875631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:25.302 [2024-04-17 14:49:33.875655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:25.302 [2024-04-17 14:49:33.875668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:25.302 
[2024-04-17 14:49:33.875679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:25.302 [2024-04-17 14:49:33.875692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:25.302 [2024-04-17 14:49:33.875704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:25.302 [2024-04-17 14:49:33.875716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:25.302 [2024-04-17 14:49:33.875728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:25.302 [2024-04-17 14:49:33.875740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:25.302 [2024-04-17 14:49:33.875751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:25.302 [2024-04-17 14:49:33.875763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:25.302 [2024-04-17 14:49:33.875774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:25.302 [2024-04-17 14:49:33.875786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:25.302 [2024-04-17 14:49:33.875798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:25.302 [2024-04-17 14:49:33.875810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:25.302 [2024-04-17 14:49:33.875821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:25.302 [2024-04-17 14:49:33.875833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:25.302 [2024-04-17 14:49:33.875845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:25.302 [2024-04-17 14:49:33.875857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:25.302 [2024-04-17 14:49:33.875868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:25.302 [2024-04-17 14:49:33.875880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:25.302 [2024-04-17 14:49:33.875891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:25.302 [2024-04-17 14:49:33.875903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:25.302 [2024-04-17 14:49:33.875914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:25.302 [2024-04-17 14:49:33.875926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.875937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.875949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.875960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: 
free 00:30:25.303 [2024-04-17 14:49:33.875972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.875983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.875996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 
261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:25.303 [2024-04-17 14:49:33.876882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:25.304 [2024-04-17 14:49:33.876894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:25.304 [2024-04-17 14:49:33.876913] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:25.304 [2024-04-17 14:49:33.876924] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5b2ea0b9-3b71-4165-8192-0f28abbb9f7b 00:30:25.304 [2024-04-17 14:49:33.876936] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
total valid LBAs: 0 00:30:25.304 [2024-04-17 14:49:33.876947] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:25.304 [2024-04-17 14:49:33.876958] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:25.304 [2024-04-17 14:49:33.876969] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:25.304 [2024-04-17 14:49:33.876979] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:25.304 [2024-04-17 14:49:33.876991] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:25.304 [2024-04-17 14:49:33.877002] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:25.304 [2024-04-17 14:49:33.877012] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:25.304 [2024-04-17 14:49:33.877022] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:25.304 [2024-04-17 14:49:33.877033] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.304 [2024-04-17 14:49:33.877045] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:25.304 [2024-04-17 14:49:33.877057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.439 ms 00:30:25.304 [2024-04-17 14:49:33.877072] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.304 [2024-04-17 14:49:33.899702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.304 [2024-04-17 14:49:33.899764] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:25.304 [2024-04-17 14:49:33.899780] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.602 ms 00:30:25.304 [2024-04-17 14:49:33.899809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.304 [2024-04-17 14:49:33.900227] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:25.304 [2024-04-17 14:49:33.900245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:25.304 [2024-04-17 14:49:33.900268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:30:25.304 [2024-04-17 14:49:33.900279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.563 [2024-04-17 14:49:33.966315] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.563 [2024-04-17 14:49:33.966411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:25.563 [2024-04-17 14:49:33.966433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.563 [2024-04-17 14:49:33.966446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.563 [2024-04-17 14:49:33.966582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.563 [2024-04-17 14:49:33.966598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:25.563 [2024-04-17 14:49:33.966616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.563 [2024-04-17 14:49:33.966628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.563 [2024-04-17 14:49:33.966693] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.563 [2024-04-17 14:49:33.966707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:25.563 [2024-04-17 14:49:33.966719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.563 [2024-04-17 
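The statistics block above explains its own "WAF: inf" line: write amplification here is evidently total (media) writes over user writes, and with 960 internal writes against 0 user writes the ratio is undefined, printed as infinity. The inferred formula, restated:

  # WAF as implied by the dump: total writes / user writes (960 / 0 -> inf).
  awk 'BEGIN { total=960; user=0; print (user ? total/user : "inf") }'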
14:49:33.966731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.563 [2024-04-17 14:49:33.966752] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.563 [2024-04-17 14:49:33.966764] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:25.563 [2024-04-17 14:49:33.966776] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.563 [2024-04-17 14:49:33.966800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.563 [2024-04-17 14:49:34.105486] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.563 [2024-04-17 14:49:34.105572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:25.563 [2024-04-17 14:49:34.105590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.563 [2024-04-17 14:49:34.105602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.563 [2024-04-17 14:49:34.157306] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.563 [2024-04-17 14:49:34.157373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:25.563 [2024-04-17 14:49:34.157415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.563 [2024-04-17 14:49:34.157427] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.563 [2024-04-17 14:49:34.157541] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.563 [2024-04-17 14:49:34.157556] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:25.563 [2024-04-17 14:49:34.157569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.563 [2024-04-17 14:49:34.157580] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.563 [2024-04-17 14:49:34.157612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.563 [2024-04-17 14:49:34.157624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:25.563 [2024-04-17 14:49:34.157637] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.563 [2024-04-17 14:49:34.157648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.563 [2024-04-17 14:49:34.157768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.563 [2024-04-17 14:49:34.157783] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:25.564 [2024-04-17 14:49:34.157796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.564 [2024-04-17 14:49:34.157807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.564 [2024-04-17 14:49:34.157848] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.564 [2024-04-17 14:49:34.157863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:25.564 [2024-04-17 14:49:34.157879] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.564 [2024-04-17 14:49:34.157891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.564 [2024-04-17 14:49:34.157937] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.564 [2024-04-17 14:49:34.157950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:25.564 [2024-04-17 14:49:34.157961] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.564 [2024-04-17 14:49:34.157972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.564 [2024-04-17 14:49:34.158022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:25.564 [2024-04-17 14:49:34.158035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:25.564 [2024-04-17 14:49:34.158047] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:25.564 [2024-04-17 14:49:34.158058] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:25.564 [2024-04-17 14:49:34.158213] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 638.752 ms, result 0 00:30:27.464 00:30:27.464 00:30:27.464 14:49:35 -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:28.030 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:30:28.030 14:49:36 -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:30:28.030 14:49:36 -- ftl/trim.sh@109 -- # fio_kill 00:30:28.030 14:49:36 -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:28.030 14:49:36 -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:28.030 14:49:36 -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:30:28.030 14:49:36 -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:30:28.030 14:49:36 -- ftl/trim.sh@20 -- # killprocess 79497 00:30:28.030 14:49:36 -- common/autotest_common.sh@936 -- # '[' -z 79497 ']' 00:30:28.030 14:49:36 -- common/autotest_common.sh@940 -- # kill -0 79497 00:30:28.030 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (79497) - No such process 00:30:28.030 14:49:36 -- common/autotest_common.sh@963 -- # echo 'Process with pid 79497 is not found' 00:30:28.030 Process with pid 79497 is not found 00:30:28.030 00:30:28.030 real 1m9.994s 00:30:28.030 user 1m34.591s 00:30:28.030 sys 0m7.222s 00:30:28.030 14:49:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:28.030 14:49:36 -- common/autotest_common.sh@10 -- # set +x 00:30:28.030 ************************************ 00:30:28.030 END TEST ftl_trim 00:30:28.030 ************************************ 00:30:28.030 14:49:36 -- ftl/ftl.sh@77 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:30:28.031 14:49:36 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:30:28.031 14:49:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:28.031 14:49:36 -- common/autotest_common.sh@10 -- # set +x 00:30:28.289 ************************************ 00:30:28.290 START TEST ftl_restore 00:30:28.290 ************************************ 00:30:28.290 14:49:36 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:30:28.290 * Looking for test storage... 
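The trim test ends with a data-integrity check: a checksum recorded earlier in the run is re-verified against the data read back through FTL, and the work files are then removed. The pattern, with paths from the log (the initial checksum step happens before this excerpt, so its exact form is assumed):

  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data \
      > /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5    # assumed earlier step
  md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5  # prints "data: OK" above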
00:30:28.290 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:28.290 14:49:36 -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:28.290 14:49:36 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:30:28.290 14:49:36 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:28.290 14:49:36 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:28.290 14:49:36 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:30:28.290 14:49:36 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:28.290 14:49:36 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:28.290 14:49:36 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:28.290 14:49:36 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:28.290 14:49:36 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:28.290 14:49:36 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:28.290 14:49:36 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:28.290 14:49:36 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:28.290 14:49:36 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:28.290 14:49:36 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:28.290 14:49:36 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:28.290 14:49:36 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:28.290 14:49:36 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:28.290 14:49:36 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:28.290 14:49:36 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:28.290 14:49:36 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:28.290 14:49:36 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:28.290 14:49:36 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:28.290 14:49:36 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:28.290 14:49:36 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:28.290 14:49:36 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:28.290 14:49:36 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:28.290 14:49:36 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:28.290 14:49:36 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:28.290 14:49:36 -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:28.290 14:49:36 -- ftl/restore.sh@13 -- # mktemp -d 00:30:28.290 14:49:36 -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.98TZEbI86L 00:30:28.290 14:49:36 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:28.290 14:49:36 -- ftl/restore.sh@16 -- # case $opt in 00:30:28.290 14:49:36 -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:30:28.290 14:49:36 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:28.290 14:49:36 -- ftl/restore.sh@23 -- # shift 2 00:30:28.290 14:49:36 -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:30:28.290 14:49:36 -- ftl/restore.sh@25 -- # timeout=240 00:30:28.290 14:49:36 -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 
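All of the option handling above traces back to the single invocation recorded at the start of this test: -c names the PCI address used for the NV cache, and the remaining positional argument is the base device.

  /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0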
00:30:28.290 14:49:36 -- ftl/restore.sh@39 -- # svcpid=79769 00:30:28.290 14:49:36 -- ftl/restore.sh@41 -- # waitforlisten 79769 00:30:28.290 14:49:36 -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:28.290 14:49:36 -- common/autotest_common.sh@817 -- # '[' -z 79769 ']' 00:30:28.290 14:49:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:28.290 14:49:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:28.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:28.290 14:49:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:28.290 14:49:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:28.290 14:49:36 -- common/autotest_common.sh@10 -- # set +x 00:30:28.549 [2024-04-17 14:49:36.918863] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:30:28.549 [2024-04-17 14:49:36.919026] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79769 ] 00:30:28.549 [2024-04-17 14:49:37.107646] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:29.115 [2024-04-17 14:49:37.423189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:30.079 14:49:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:30.079 14:49:38 -- common/autotest_common.sh@850 -- # return 0 00:30:30.079 14:49:38 -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:30:30.079 14:49:38 -- ftl/common.sh@54 -- # local name=nvme0 00:30:30.079 14:49:38 -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:30.079 14:49:38 -- ftl/common.sh@56 -- # local size=103424 00:30:30.079 14:49:38 -- ftl/common.sh@59 -- # local base_bdev 00:30:30.079 14:49:38 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:30:30.337 14:49:38 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:30:30.337 14:49:38 -- ftl/common.sh@62 -- # local base_size 00:30:30.337 14:49:38 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:30:30.337 14:49:38 -- common/autotest_common.sh@1364 -- # local bdev_name=nvme0n1 00:30:30.337 14:49:38 -- common/autotest_common.sh@1365 -- # local bdev_info 00:30:30.337 14:49:38 -- common/autotest_common.sh@1366 -- # local bs 00:30:30.337 14:49:38 -- common/autotest_common.sh@1367 -- # local nb 00:30:30.337 14:49:38 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:30:30.595 14:49:39 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:30:30.595 { 00:30:30.595 "name": "nvme0n1", 00:30:30.595 "aliases": [ 00:30:30.595 "1702a26c-26b9-4881-8ced-2f3c5229fdee" 00:30:30.595 ], 00:30:30.595 "product_name": "NVMe disk", 00:30:30.595 "block_size": 4096, 00:30:30.595 "num_blocks": 1310720, 00:30:30.595 "uuid": "1702a26c-26b9-4881-8ced-2f3c5229fdee", 00:30:30.595 "assigned_rate_limits": { 00:30:30.595 "rw_ios_per_sec": 0, 00:30:30.595 "rw_mbytes_per_sec": 0, 00:30:30.595 "r_mbytes_per_sec": 0, 00:30:30.595 "w_mbytes_per_sec": 0 00:30:30.595 }, 00:30:30.595 "claimed": true, 00:30:30.595 "claim_type": "read_many_write_one", 00:30:30.595 "zoned": false, 00:30:30.595 "supported_io_types": { 00:30:30.595 "read": true, 00:30:30.595 "write": 
true, 00:30:30.595 "unmap": true, 00:30:30.595 "write_zeroes": true, 00:30:30.595 "flush": true, 00:30:30.595 "reset": true, 00:30:30.595 "compare": true, 00:30:30.595 "compare_and_write": false, 00:30:30.595 "abort": true, 00:30:30.595 "nvme_admin": true, 00:30:30.595 "nvme_io": true 00:30:30.595 }, 00:30:30.595 "driver_specific": { 00:30:30.595 "nvme": [ 00:30:30.595 { 00:30:30.595 "pci_address": "0000:00:11.0", 00:30:30.595 "trid": { 00:30:30.595 "trtype": "PCIe", 00:30:30.595 "traddr": "0000:00:11.0" 00:30:30.595 }, 00:30:30.595 "ctrlr_data": { 00:30:30.595 "cntlid": 0, 00:30:30.595 "vendor_id": "0x1b36", 00:30:30.595 "model_number": "QEMU NVMe Ctrl", 00:30:30.595 "serial_number": "12341", 00:30:30.595 "firmware_revision": "8.0.0", 00:30:30.595 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:30.595 "oacs": { 00:30:30.595 "security": 0, 00:30:30.595 "format": 1, 00:30:30.595 "firmware": 0, 00:30:30.595 "ns_manage": 1 00:30:30.595 }, 00:30:30.595 "multi_ctrlr": false, 00:30:30.595 "ana_reporting": false 00:30:30.595 }, 00:30:30.595 "vs": { 00:30:30.595 "nvme_version": "1.4" 00:30:30.595 }, 00:30:30.595 "ns_data": { 00:30:30.595 "id": 1, 00:30:30.595 "can_share": false 00:30:30.595 } 00:30:30.595 } 00:30:30.595 ], 00:30:30.595 "mp_policy": "active_passive" 00:30:30.595 } 00:30:30.595 } 00:30:30.595 ]' 00:30:30.595 14:49:39 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:30:30.595 14:49:39 -- common/autotest_common.sh@1369 -- # bs=4096 00:30:30.595 14:49:39 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:30:30.853 14:49:39 -- common/autotest_common.sh@1370 -- # nb=1310720 00:30:30.853 14:49:39 -- common/autotest_common.sh@1373 -- # bdev_size=5120 00:30:30.853 14:49:39 -- common/autotest_common.sh@1374 -- # echo 5120 00:30:30.853 14:49:39 -- ftl/common.sh@63 -- # base_size=5120 00:30:30.853 14:49:39 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:30:30.853 14:49:39 -- ftl/common.sh@67 -- # clear_lvols 00:30:30.853 14:49:39 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:30.853 14:49:39 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:31.110 14:49:39 -- ftl/common.sh@28 -- # stores=8dae4429-d930-4c54-bd92-f46b8049ad7c 00:30:31.110 14:49:39 -- ftl/common.sh@29 -- # for lvs in $stores 00:30:31.110 14:49:39 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8dae4429-d930-4c54-bd92-f46b8049ad7c 00:30:31.368 14:49:39 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:30:31.626 14:49:40 -- ftl/common.sh@68 -- # lvs=d67ae1a5-b914-4937-89fe-9366d55647c5 00:30:31.626 14:49:40 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d67ae1a5-b914-4937-89fe-9366d55647c5 00:30:31.884 14:49:40 -- ftl/restore.sh@43 -- # split_bdev=c5761cf4-f763-4d6a-a033-3c2d8591a4a2 00:30:31.884 14:49:40 -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:30:31.884 14:49:40 -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c5761cf4-f763-4d6a-a033-3c2d8591a4a2 00:30:31.884 14:49:40 -- ftl/common.sh@35 -- # local name=nvc0 00:30:31.884 14:49:40 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:31.884 14:49:40 -- ftl/common.sh@37 -- # local base_bdev=c5761cf4-f763-4d6a-a033-3c2d8591a4a2 00:30:31.884 14:49:40 -- ftl/common.sh@38 -- # local cache_size= 00:30:31.884 14:49:40 -- ftl/common.sh@41 -- # get_bdev_size c5761cf4-f763-4d6a-a033-3c2d8591a4a2 00:30:31.884 
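get_bdev_size, used repeatedly above, reduces to one RPC and two jq filters; the 1310720 blocks of 4096 bytes reported for nvme0n1 work out to exactly the 5120 MiB it echoes back:

  info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1)
  bs=$(jq '.[] .block_size' <<< "$info")    # 4096
  nb=$(jq '.[] .num_blocks' <<< "$info")    # 1310720
  echo $(( nb * bs / 1024 / 1024 ))         # 5120 (MiB)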
14:49:40 -- common/autotest_common.sh@1364 -- # local bdev_name=c5761cf4-f763-4d6a-a033-3c2d8591a4a2 00:30:31.884 14:49:40 -- common/autotest_common.sh@1365 -- # local bdev_info 00:30:31.884 14:49:40 -- common/autotest_common.sh@1366 -- # local bs 00:30:31.884 14:49:40 -- common/autotest_common.sh@1367 -- # local nb 00:30:31.884 14:49:40 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c5761cf4-f763-4d6a-a033-3c2d8591a4a2 00:30:32.143 14:49:40 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:30:32.143 { 00:30:32.143 "name": "c5761cf4-f763-4d6a-a033-3c2d8591a4a2", 00:30:32.143 "aliases": [ 00:30:32.143 "lvs/nvme0n1p0" 00:30:32.143 ], 00:30:32.143 "product_name": "Logical Volume", 00:30:32.143 "block_size": 4096, 00:30:32.143 "num_blocks": 26476544, 00:30:32.143 "uuid": "c5761cf4-f763-4d6a-a033-3c2d8591a4a2", 00:30:32.143 "assigned_rate_limits": { 00:30:32.143 "rw_ios_per_sec": 0, 00:30:32.143 "rw_mbytes_per_sec": 0, 00:30:32.143 "r_mbytes_per_sec": 0, 00:30:32.143 "w_mbytes_per_sec": 0 00:30:32.143 }, 00:30:32.143 "claimed": false, 00:30:32.143 "zoned": false, 00:30:32.143 "supported_io_types": { 00:30:32.143 "read": true, 00:30:32.143 "write": true, 00:30:32.143 "unmap": true, 00:30:32.143 "write_zeroes": true, 00:30:32.143 "flush": false, 00:30:32.143 "reset": true, 00:30:32.143 "compare": false, 00:30:32.143 "compare_and_write": false, 00:30:32.143 "abort": false, 00:30:32.143 "nvme_admin": false, 00:30:32.143 "nvme_io": false 00:30:32.143 }, 00:30:32.143 "driver_specific": { 00:30:32.143 "lvol": { 00:30:32.143 "lvol_store_uuid": "d67ae1a5-b914-4937-89fe-9366d55647c5", 00:30:32.143 "base_bdev": "nvme0n1", 00:30:32.143 "thin_provision": true, 00:30:32.143 "snapshot": false, 00:30:32.143 "clone": false, 00:30:32.143 "esnap_clone": false 00:30:32.143 } 00:30:32.143 } 00:30:32.143 } 00:30:32.143 ]' 00:30:32.143 14:49:40 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:30:32.143 14:49:40 -- common/autotest_common.sh@1369 -- # bs=4096 00:30:32.143 14:49:40 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:30:32.143 14:49:40 -- common/autotest_common.sh@1370 -- # nb=26476544 00:30:32.143 14:49:40 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:30:32.143 14:49:40 -- common/autotest_common.sh@1374 -- # echo 103424 00:30:32.143 14:49:40 -- ftl/common.sh@41 -- # local base_size=5171 00:30:32.143 14:49:40 -- ftl/common.sh@44 -- # local nvc_bdev 00:30:32.143 14:49:40 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:30:32.401 14:49:40 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:30:32.401 14:49:40 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:30:32.401 14:49:40 -- ftl/common.sh@48 -- # get_bdev_size c5761cf4-f763-4d6a-a033-3c2d8591a4a2 00:30:32.401 14:49:40 -- common/autotest_common.sh@1364 -- # local bdev_name=c5761cf4-f763-4d6a-a033-3c2d8591a4a2 00:30:32.401 14:49:40 -- common/autotest_common.sh@1365 -- # local bdev_info 00:30:32.401 14:49:40 -- common/autotest_common.sh@1366 -- # local bs 00:30:32.401 14:49:40 -- common/autotest_common.sh@1367 -- # local nb 00:30:32.401 14:49:40 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c5761cf4-f763-4d6a-a033-3c2d8591a4a2 00:30:32.967 14:49:41 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:30:32.967 { 00:30:32.967 "name": "c5761cf4-f763-4d6a-a033-3c2d8591a4a2", 00:30:32.967 "aliases": [ 00:30:32.967 "lvs/nvme0n1p0" 00:30:32.967 
], 00:30:32.967 "product_name": "Logical Volume", 00:30:32.967 "block_size": 4096, 00:30:32.967 "num_blocks": 26476544, 00:30:32.967 "uuid": "c5761cf4-f763-4d6a-a033-3c2d8591a4a2", 00:30:32.967 "assigned_rate_limits": { 00:30:32.967 "rw_ios_per_sec": 0, 00:30:32.967 "rw_mbytes_per_sec": 0, 00:30:32.967 "r_mbytes_per_sec": 0, 00:30:32.967 "w_mbytes_per_sec": 0 00:30:32.967 }, 00:30:32.967 "claimed": false, 00:30:32.967 "zoned": false, 00:30:32.967 "supported_io_types": { 00:30:32.967 "read": true, 00:30:32.967 "write": true, 00:30:32.967 "unmap": true, 00:30:32.967 "write_zeroes": true, 00:30:32.967 "flush": false, 00:30:32.967 "reset": true, 00:30:32.967 "compare": false, 00:30:32.967 "compare_and_write": false, 00:30:32.967 "abort": false, 00:30:32.967 "nvme_admin": false, 00:30:32.967 "nvme_io": false 00:30:32.967 }, 00:30:32.967 "driver_specific": { 00:30:32.967 "lvol": { 00:30:32.967 "lvol_store_uuid": "d67ae1a5-b914-4937-89fe-9366d55647c5", 00:30:32.967 "base_bdev": "nvme0n1", 00:30:32.967 "thin_provision": true, 00:30:32.967 "snapshot": false, 00:30:32.967 "clone": false, 00:30:32.967 "esnap_clone": false 00:30:32.967 } 00:30:32.967 } 00:30:32.967 } 00:30:32.967 ]' 00:30:32.967 14:49:41 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:30:32.967 14:49:41 -- common/autotest_common.sh@1369 -- # bs=4096 00:30:32.967 14:49:41 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:30:32.967 14:49:41 -- common/autotest_common.sh@1370 -- # nb=26476544 00:30:32.967 14:49:41 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:30:32.967 14:49:41 -- common/autotest_common.sh@1374 -- # echo 103424 00:30:32.967 14:49:41 -- ftl/common.sh@48 -- # cache_size=5171 00:30:32.967 14:49:41 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:30:33.225 14:49:41 -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:30:33.225 14:49:41 -- ftl/restore.sh@48 -- # get_bdev_size c5761cf4-f763-4d6a-a033-3c2d8591a4a2 00:30:33.225 14:49:41 -- common/autotest_common.sh@1364 -- # local bdev_name=c5761cf4-f763-4d6a-a033-3c2d8591a4a2 00:30:33.225 14:49:41 -- common/autotest_common.sh@1365 -- # local bdev_info 00:30:33.225 14:49:41 -- common/autotest_common.sh@1366 -- # local bs 00:30:33.225 14:49:41 -- common/autotest_common.sh@1367 -- # local nb 00:30:33.225 14:49:41 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c5761cf4-f763-4d6a-a033-3c2d8591a4a2 00:30:33.483 14:49:41 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:30:33.483 { 00:30:33.483 "name": "c5761cf4-f763-4d6a-a033-3c2d8591a4a2", 00:30:33.483 "aliases": [ 00:30:33.483 "lvs/nvme0n1p0" 00:30:33.483 ], 00:30:33.483 "product_name": "Logical Volume", 00:30:33.483 "block_size": 4096, 00:30:33.483 "num_blocks": 26476544, 00:30:33.483 "uuid": "c5761cf4-f763-4d6a-a033-3c2d8591a4a2", 00:30:33.483 "assigned_rate_limits": { 00:30:33.483 "rw_ios_per_sec": 0, 00:30:33.483 "rw_mbytes_per_sec": 0, 00:30:33.483 "r_mbytes_per_sec": 0, 00:30:33.483 "w_mbytes_per_sec": 0 00:30:33.483 }, 00:30:33.483 "claimed": false, 00:30:33.483 "zoned": false, 00:30:33.483 "supported_io_types": { 00:30:33.483 "read": true, 00:30:33.483 "write": true, 00:30:33.483 "unmap": true, 00:30:33.483 "write_zeroes": true, 00:30:33.483 "flush": false, 00:30:33.483 "reset": true, 00:30:33.483 "compare": false, 00:30:33.483 "compare_and_write": false, 00:30:33.483 "abort": false, 00:30:33.483 "nvme_admin": false, 00:30:33.483 "nvme_io": false 00:30:33.483 }, 00:30:33.483 
"driver_specific": { 00:30:33.483 "lvol": { 00:30:33.483 "lvol_store_uuid": "d67ae1a5-b914-4937-89fe-9366d55647c5", 00:30:33.483 "base_bdev": "nvme0n1", 00:30:33.483 "thin_provision": true, 00:30:33.483 "snapshot": false, 00:30:33.483 "clone": false, 00:30:33.483 "esnap_clone": false 00:30:33.483 } 00:30:33.483 } 00:30:33.483 } 00:30:33.483 ]' 00:30:33.483 14:49:41 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:30:33.483 14:49:41 -- common/autotest_common.sh@1369 -- # bs=4096 00:30:33.483 14:49:41 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:30:33.483 14:49:42 -- common/autotest_common.sh@1370 -- # nb=26476544 00:30:33.483 14:49:42 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:30:33.483 14:49:42 -- common/autotest_common.sh@1374 -- # echo 103424 00:30:33.483 14:49:42 -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:30:33.483 14:49:42 -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d c5761cf4-f763-4d6a-a033-3c2d8591a4a2 --l2p_dram_limit 10' 00:30:33.483 14:49:42 -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:30:33.483 14:49:42 -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:30:33.483 14:49:42 -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:30:33.483 14:49:42 -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:30:33.483 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:30:33.483 14:49:42 -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c5761cf4-f763-4d6a-a033-3c2d8591a4a2 --l2p_dram_limit 10 -c nvc0n1p0 00:30:33.741 [2024-04-17 14:49:42.285017] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.741 [2024-04-17 14:49:42.285278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:33.741 [2024-04-17 14:49:42.285389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:33.741 [2024-04-17 14:49:42.285432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.741 [2024-04-17 14:49:42.285551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.741 [2024-04-17 14:49:42.285593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:33.741 [2024-04-17 14:49:42.285706] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:30:33.741 [2024-04-17 14:49:42.285745] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.741 [2024-04-17 14:49:42.285802] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:33.741 [2024-04-17 14:49:42.287150] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:33.741 [2024-04-17 14:49:42.287328] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.741 [2024-04-17 14:49:42.287411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:33.741 [2024-04-17 14:49:42.287458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.531 ms 00:30:33.741 [2024-04-17 14:49:42.287577] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.741 [2024-04-17 14:49:42.287791] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ef2062c2-7b02-4ec3-91e2-3a1b1495b72a 00:30:33.741 [2024-04-17 14:49:42.289289] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.741 [2024-04-17 
14:49:42.289467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:30:33.741 [2024-04-17 14:49:42.289578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:30:33.741 [2024-04-17 14:49:42.289623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.741 [2024-04-17 14:49:42.297409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.741 [2024-04-17 14:49:42.297612] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:33.741 [2024-04-17 14:49:42.297711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.671 ms 00:30:33.741 [2024-04-17 14:49:42.297756] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.742 [2024-04-17 14:49:42.297989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.742 [2024-04-17 14:49:42.298034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:33.742 [2024-04-17 14:49:42.298071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:30:33.742 [2024-04-17 14:49:42.298108] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.742 [2024-04-17 14:49:42.298284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.742 [2024-04-17 14:49:42.298333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:33.742 [2024-04-17 14:49:42.298537] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:33.742 [2024-04-17 14:49:42.298590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.742 [2024-04-17 14:49:42.298648] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:33.742 [2024-04-17 14:49:42.305051] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.742 [2024-04-17 14:49:42.305178] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:33.742 [2024-04-17 14:49:42.305252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.407 ms 00:30:33.742 [2024-04-17 14:49:42.305287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.742 [2024-04-17 14:49:42.305346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.742 [2024-04-17 14:49:42.305377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:33.742 [2024-04-17 14:49:42.305410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:33.742 [2024-04-17 14:49:42.305439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.742 [2024-04-17 14:49:42.305604] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:30:33.742 [2024-04-17 14:49:42.305742] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:30:33.742 [2024-04-17 14:49:42.305931] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:33.742 [2024-04-17 14:49:42.305985] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:30:33.742 [2024-04-17 14:49:42.306038] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:33.742 [2024-04-17 14:49:42.306168] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:33.742 [2024-04-17 14:49:42.306242] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:33.742 [2024-04-17 14:49:42.306274] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:33.742 [2024-04-17 14:49:42.306309] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:30:33.742 [2024-04-17 14:49:42.306342] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:30:33.742 [2024-04-17 14:49:42.306410] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.742 [2024-04-17 14:49:42.306553] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:33.742 [2024-04-17 14:49:42.306601] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.806 ms 00:30:33.742 [2024-04-17 14:49:42.306636] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.742 [2024-04-17 14:49:42.306736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.742 [2024-04-17 14:49:42.306825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:33.742 [2024-04-17 14:49:42.306872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:30:33.742 [2024-04-17 14:49:42.306907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.742 [2024-04-17 14:49:42.307016] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:33.742 [2024-04-17 14:49:42.307059] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:33.742 [2024-04-17 14:49:42.307097] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:33.742 [2024-04-17 14:49:42.307178] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:33.742 [2024-04-17 14:49:42.307225] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:33.742 [2024-04-17 14:49:42.307259] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:33.742 [2024-04-17 14:49:42.307343] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:33.742 [2024-04-17 14:49:42.307383] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:33.742 [2024-04-17 14:49:42.307419] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:33.742 [2024-04-17 14:49:42.307487] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:33.742 [2024-04-17 14:49:42.307545] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:33.742 [2024-04-17 14:49:42.307692] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:33.742 [2024-04-17 14:49:42.307733] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:33.742 [2024-04-17 14:49:42.307765] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:33.742 [2024-04-17 14:49:42.307803] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:30:33.742 [2024-04-17 14:49:42.307835] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:33.742 [2024-04-17 14:49:42.307994] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:33.742 [2024-04-17 14:49:42.308032] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:30:33.742 [2024-04-17 14:49:42.308067] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:30:33.742 [2024-04-17 14:49:42.308099] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:30:33.742 [2024-04-17 14:49:42.308182] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:30:33.742 [2024-04-17 14:49:42.308220] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:30:33.742 [2024-04-17 14:49:42.308255] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:33.742 [2024-04-17 14:49:42.308286] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:33.742 [2024-04-17 14:49:42.308354] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:30:33.742 [2024-04-17 14:49:42.308390] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:33.742 [2024-04-17 14:49:42.308424] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:30:33.742 [2024-04-17 14:49:42.308455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:30:33.742 [2024-04-17 14:49:42.308499] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:33.742 [2024-04-17 14:49:42.308581] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:33.742 [2024-04-17 14:49:42.308622] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:30:33.742 [2024-04-17 14:49:42.308653] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:33.742 [2024-04-17 14:49:42.308687] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:30:33.742 [2024-04-17 14:49:42.308719] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:30:33.742 [2024-04-17 14:49:42.308801] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:33.742 [2024-04-17 14:49:42.308833] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:33.742 [2024-04-17 14:49:42.308870] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:33.742 [2024-04-17 14:49:42.308938] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:33.742 [2024-04-17 14:49:42.308977] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:30:33.742 [2024-04-17 14:49:42.309053] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:33.742 [2024-04-17 14:49:42.309093] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:33.742 [2024-04-17 14:49:42.309126] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:33.742 [2024-04-17 14:49:42.309195] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:33.742 [2024-04-17 14:49:42.309276] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:33.742 [2024-04-17 14:49:42.309317] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:33.742 [2024-04-17 14:49:42.309380] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:33.742 [2024-04-17 14:49:42.309419] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:33.742 [2024-04-17 14:49:42.309452] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:33.742 [2024-04-17 14:49:42.309546] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:33.742 [2024-04-17 14:49:42.309587] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:33.742 [2024-04-17 14:49:42.309623] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:33.742 [2024-04-17 14:49:42.309728] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:33.743 [2024-04-17 14:49:42.309789] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:33.743 [2024-04-17 14:49:42.309886] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:30:33.743 [2024-04-17 14:49:42.309946] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:30:33.743 [2024-04-17 14:49:42.309997] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:30:33.743 [2024-04-17 14:49:42.310086] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:30:33.743 [2024-04-17 14:49:42.310139] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:30:33.743 [2024-04-17 14:49:42.310192] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:30:33.743 [2024-04-17 14:49:42.310291] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:30:33.743 [2024-04-17 14:49:42.310348] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:30:33.743 [2024-04-17 14:49:42.310430] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:30:33.743 [2024-04-17 14:49:42.310574] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:30:33.743 [2024-04-17 14:49:42.310632] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:30:33.743 [2024-04-17 14:49:42.310690] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:30:33.743 [2024-04-17 14:49:42.310783] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:33.743 [2024-04-17 14:49:42.310841] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:33.743 [2024-04-17 14:49:42.310941] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:33.743 [2024-04-17 14:49:42.311053] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:33.743 [2024-04-17 14:49:42.311113] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:33.743 [2024-04-17 14:49:42.311209] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:30:33.743 [2024-04-17 14:49:42.311311] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.743 [2024-04-17 14:49:42.311367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:33.743 [2024-04-17 14:49:42.311404] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.340 ms 00:30:33.743 [2024-04-17 14:49:42.311441] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.743 [2024-04-17 14:49:42.337119] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.743 [2024-04-17 14:49:42.337346] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:33.743 [2024-04-17 14:49:42.337425] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.556 ms 00:30:33.743 [2024-04-17 14:49:42.337464] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.743 [2024-04-17 14:49:42.337598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.743 [2024-04-17 14:49:42.337641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:33.743 [2024-04-17 14:49:42.337672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:30:33.743 [2024-04-17 14:49:42.337761] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.001 [2024-04-17 14:49:42.393543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.001 [2024-04-17 14:49:42.393771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:34.001 [2024-04-17 14:49:42.393932] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.678 ms 00:30:34.001 [2024-04-17 14:49:42.393980] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.001 [2024-04-17 14:49:42.394056] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.001 [2024-04-17 14:49:42.394163] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:34.001 [2024-04-17 14:49:42.394204] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:34.001 [2024-04-17 14:49:42.394245] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.001 [2024-04-17 14:49:42.394859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.001 [2024-04-17 14:49:42.394991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:34.001 [2024-04-17 14:49:42.395070] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:30:34.001 [2024-04-17 14:49:42.395112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.001 [2024-04-17 14:49:42.395373] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.001 [2024-04-17 14:49:42.395463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:34.001 [2024-04-17 14:49:42.395558] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:30:34.001 [2024-04-17 14:49:42.395601] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.001 [2024-04-17 14:49:42.421645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.001 [2024-04-17 14:49:42.421851] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:34.001 [2024-04-17 14:49:42.421928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.957 ms 00:30:34.001 [2024-04-17 
14:49:42.421971] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.001 [2024-04-17 14:49:42.437322] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:34.001 [2024-04-17 14:49:42.440865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.001 [2024-04-17 14:49:42.441019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:34.001 [2024-04-17 14:49:42.441109] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.749 ms 00:30:34.001 [2024-04-17 14:49:42.441147] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.001 [2024-04-17 14:49:42.529349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.001 [2024-04-17 14:49:42.529656] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:30:34.001 [2024-04-17 14:49:42.529756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.125 ms 00:30:34.001 [2024-04-17 14:49:42.529799] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.001 [2024-04-17 14:49:42.529892] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:30:34.001 [2024-04-17 14:49:42.529959] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:30:37.338 [2024-04-17 14:49:45.561404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.338 [2024-04-17 14:49:45.561622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:30:37.338 [2024-04-17 14:49:45.561754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3031.490 ms 00:30:37.338 [2024-04-17 14:49:45.561799] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.338 [2024-04-17 14:49:45.562060] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.338 [2024-04-17 14:49:45.562111] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:37.338 [2024-04-17 14:49:45.562204] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:30:37.338 [2024-04-17 14:49:45.562245] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.338 [2024-04-17 14:49:45.611070] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.339 [2024-04-17 14:49:45.611294] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:30:37.339 [2024-04-17 14:49:45.611402] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.703 ms 00:30:37.339 [2024-04-17 14:49:45.611445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.339 [2024-04-17 14:49:45.660135] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.339 [2024-04-17 14:49:45.660361] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:30:37.339 [2024-04-17 14:49:45.660499] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.481 ms 00:30:37.339 [2024-04-17 14:49:45.660542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.339 [2024-04-17 14:49:45.661076] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.339 [2024-04-17 14:49:45.661206] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:37.339 [2024-04-17 14:49:45.661329] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.445 ms 00:30:37.339 [2024-04-17 14:49:45.661376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.339 [2024-04-17 14:49:45.776661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.339 [2024-04-17 14:49:45.776921] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:30:37.339 [2024-04-17 14:49:45.777024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 115.159 ms 00:30:37.339 [2024-04-17 14:49:45.777067] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.339 [2024-04-17 14:49:45.823704] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.339 [2024-04-17 14:49:45.823971] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:30:37.339 [2024-04-17 14:49:45.824092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.536 ms 00:30:37.339 [2024-04-17 14:49:45.824132] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.339 [2024-04-17 14:49:45.826642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.339 [2024-04-17 14:49:45.826772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:30:37.339 [2024-04-17 14:49:45.826857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.414 ms 00:30:37.339 [2024-04-17 14:49:45.826899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.339 [2024-04-17 14:49:45.876250] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.339 [2024-04-17 14:49:45.876483] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:37.339 [2024-04-17 14:49:45.876652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.221 ms 00:30:37.339 [2024-04-17 14:49:45.876734] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.339 [2024-04-17 14:49:45.876874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.339 [2024-04-17 14:49:45.876974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:37.339 [2024-04-17 14:49:45.877056] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:37.339 [2024-04-17 14:49:45.877099] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.339 [2024-04-17 14:49:45.877337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:37.339 [2024-04-17 14:49:45.877437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:37.339 [2024-04-17 14:49:45.877555] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:30:37.339 [2024-04-17 14:49:45.877603] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:37.339 [2024-04-17 14:49:45.878876] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3593.280 ms, result 0 00:30:37.339 { 00:30:37.339 "name": "ftl0", 00:30:37.339 "uuid": "ef2062c2-7b02-4ec3-91e2-3a1b1495b72a" 00:30:37.339 } 00:30:37.339 14:49:45 -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:30:37.339 14:49:45 -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:30:37.612 14:49:46 -- ftl/restore.sh@63 -- # echo ']}' 00:30:37.612 14:49:46 -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 
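The startup trace above ends with 'FTL startup' finishing in 3593.280 ms and the new bdev reporting as ftl0 (UUID ef2062c2-7b02-4ec3-91e2-3a1b1495b72a); restore.sh then snapshots the bdev subsystem configuration and unloads the device, which is what produces the 'FTL shutdown' trace below. A condensed sketch of the sequence the log records, for reference — the RPC names, device names, and arguments are taken from the trace itself, while the get_size_mb helper and the sanity-check line are illustrative rather than the test's actual code:

#!/usr/bin/env bash
# Illustrative condensation of the ftl/restore.sh steps traced above; not the
# test's real code. RPC calls and device names come from the log itself.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# get_bdev_size equivalent: block_size * num_blocks, reported in MiB
# (4096 * 26476544 / 1024 / 1024 = 103424 MiB for the lvol dumped above).
get_size_mb() {
  local info
  info=$("$rpc" bdev_get_bdevs -b "$1")
  echo $(( $(jq '.[] .block_size' <<<"$info") \
         * $(jq '.[] .num_blocks' <<<"$info") / 1024 / 1024 ))
}

# Sanity-check the base bdev size the way the trace above does (hypothetical).
echo "base bdev: $(get_size_mb c5761cf4-f763-4d6a-a033-3c2d8591a4a2) MiB"

# Carve the 5171 MiB write-buffer partition off the cache namespace
# (produces nvc0n1p0).
"$rpc" bdev_split_create nvc0n1 -s 5171 1

# Create the FTL bdev on the thin-provisioned lvol, capping the L2P table at
# 10 MiB of DRAM and attaching the split partition as NV cache.
"$rpc" -t 240 bdev_ftl_create -b ftl0 \
  -d c5761cf4-f763-4d6a-a033-3c2d8591a4a2 \
  --l2p_dram_limit 10 -c nvc0n1p0

# Persist the bdev subsystem layout for later reuse ...
{
  echo '{"subsystems": ['
  "$rpc" save_subsystem_config -n bdev
  echo ']}'
} > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

# ... then tear the device down cleanly, triggering the shutdown trace below.
"$rpc" bdev_ftl_unload -b ftl0

The --l2p_dram_limit 10 cap is what the earlier "l2p maximum resident size is: 9 (of 10) MiB" notice refers to, and the saved ftl.json is the configuration that spdk_dd later consumes via --json when the write phase of the restore test starts.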
00:30:38.180 [2024-04-17 14:49:46.501698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.180 [2024-04-17 14:49:46.501997] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:38.180 [2024-04-17 14:49:46.502091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:38.180 [2024-04-17 14:49:46.502135] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.180 [2024-04-17 14:49:46.502199] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:38.180 [2024-04-17 14:49:46.505693] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.180 [2024-04-17 14:49:46.505858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:38.180 [2024-04-17 14:49:46.506025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.420 ms 00:30:38.180 [2024-04-17 14:49:46.506065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.180 [2024-04-17 14:49:46.506419] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.180 [2024-04-17 14:49:46.506619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:38.180 [2024-04-17 14:49:46.506673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:30:38.180 [2024-04-17 14:49:46.506765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.180 [2024-04-17 14:49:46.509511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.180 [2024-04-17 14:49:46.509632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:38.180 [2024-04-17 14:49:46.509751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.637 ms 00:30:38.180 [2024-04-17 14:49:46.509792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.180 [2024-04-17 14:49:46.515364] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.180 [2024-04-17 14:49:46.515518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:30:38.180 [2024-04-17 14:49:46.515611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.513 ms 00:30:38.180 [2024-04-17 14:49:46.515647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.180 [2024-04-17 14:49:46.555703] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.180 [2024-04-17 14:49:46.555901] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:38.180 [2024-04-17 14:49:46.556029] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.921 ms 00:30:38.180 [2024-04-17 14:49:46.556067] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.180 [2024-04-17 14:49:46.580394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.180 [2024-04-17 14:49:46.580583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:38.180 [2024-04-17 14:49:46.580715] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.258 ms 00:30:38.180 [2024-04-17 14:49:46.580757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.180 [2024-04-17 14:49:46.580947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.180 [2024-04-17 14:49:46.581082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:38.180 
[2024-04-17 14:49:46.581165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:30:38.180 [2024-04-17 14:49:46.581198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.180 [2024-04-17 14:49:46.622797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.180 [2024-04-17 14:49:46.623009] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:30:38.180 [2024-04-17 14:49:46.623096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.546 ms 00:30:38.180 [2024-04-17 14:49:46.623137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.180 [2024-04-17 14:49:46.668177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.181 [2024-04-17 14:49:46.668407] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:30:38.181 [2024-04-17 14:49:46.668567] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.940 ms 00:30:38.181 [2024-04-17 14:49:46.668611] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.181 [2024-04-17 14:49:46.715447] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.181 [2024-04-17 14:49:46.715647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:38.181 [2024-04-17 14:49:46.715781] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.726 ms 00:30:38.181 [2024-04-17 14:49:46.715824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.181 [2024-04-17 14:49:46.762573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.181 [2024-04-17 14:49:46.762783] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:38.181 [2024-04-17 14:49:46.762894] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.531 ms 00:30:38.181 [2024-04-17 14:49:46.762937] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.181 [2024-04-17 14:49:46.763033] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:38.181 [2024-04-17 14:49:46.763085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.763196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.763255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.763320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.763428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.763566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.763658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.763718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.763831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.763893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: 
free 00:30:38.181 [2024-04-17 14:49:46.763985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.764046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.764144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.764206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.764326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.764394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.764541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.764654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.764714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.764777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.764884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.764951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.765006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.765064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.765168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.765225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.765280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.765395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.765515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.765585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.765699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.765763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.765931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.765991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.766046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 
261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.766106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.766231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.766298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.766362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.766504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.766564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.766675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.766801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.766864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.766922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.766982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.767133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.767202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.767306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.767373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.767429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.767558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.767618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.767723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.767851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.767914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.767969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.768073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.768128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.768185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:38.181 [2024-04-17 14:49:46.768240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.768345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.768399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.768456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.768532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.768632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.768687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.768746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.768851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.768921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.768976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.769087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.769145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.769202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.769286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.769343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.769443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.769625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.769686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.769743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.769844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.769902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.769956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.770016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.770109] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.770165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.770220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.770278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.770402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.770464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.770533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.770645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.770708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.770767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.770868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.770934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.770990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.771097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.771157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.771222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:38.182 [2024-04-17 14:49:46.771329] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:38.182 [2024-04-17 14:49:46.771368] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ef2062c2-7b02-4ec3-91e2-3a1b1495b72a 00:30:38.182 [2024-04-17 14:49:46.771477] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:38.182 [2024-04-17 14:49:46.771529] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:38.182 [2024-04-17 14:49:46.771599] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:38.182 [2024-04-17 14:49:46.771654] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:38.182 [2024-04-17 14:49:46.771688] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:38.182 [2024-04-17 14:49:46.771726] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:38.182 [2024-04-17 14:49:46.771760] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:38.182 [2024-04-17 14:49:46.771796] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:38.182 [2024-04-17 14:49:46.771829] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:38.182 [2024-04-17 14:49:46.771909] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.182 [2024-04-17 14:49:46.771956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:38.182 [2024-04-17 14:49:46.771996] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.877 ms 00:30:38.182 [2024-04-17 14:49:46.772031] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.441 [2024-04-17 14:49:46.796872] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.441 [2024-04-17 14:49:46.797113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:38.441 [2024-04-17 14:49:46.797221] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.703 ms 00:30:38.441 [2024-04-17 14:49:46.797263] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.441 [2024-04-17 14:49:46.797652] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.441 [2024-04-17 14:49:46.797769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:38.441 [2024-04-17 14:49:46.797857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:30:38.441 [2024-04-17 14:49:46.797897] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.441 [2024-04-17 14:49:46.880055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:38.441 [2024-04-17 14:49:46.880293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:38.441 [2024-04-17 14:49:46.880386] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:38.441 [2024-04-17 14:49:46.880428] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.441 [2024-04-17 14:49:46.880558] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:38.441 [2024-04-17 14:49:46.880655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:38.441 [2024-04-17 14:49:46.880700] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:38.441 [2024-04-17 14:49:46.880734] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.441 [2024-04-17 14:49:46.880923] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:38.441 [2024-04-17 14:49:46.881027] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:38.441 [2024-04-17 14:49:46.881111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:38.441 [2024-04-17 14:49:46.881151] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.441 [2024-04-17 14:49:46.881205] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:38.441 [2024-04-17 14:49:46.881293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:38.441 [2024-04-17 14:49:46.881338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:38.441 [2024-04-17 14:49:46.881373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.441 [2024-04-17 14:49:47.022949] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:38.441 [2024-04-17 14:49:47.023199] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:38.441 [2024-04-17 14:49:47.023336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:38.441 [2024-04-17 14:49:47.023379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:30:38.700 [2024-04-17 14:49:47.077712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:38.700 [2024-04-17 14:49:47.077964] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:38.700 [2024-04-17 14:49:47.078048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:38.700 [2024-04-17 14:49:47.078087] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.700 [2024-04-17 14:49:47.078219] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:38.700 [2024-04-17 14:49:47.078259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:38.700 [2024-04-17 14:49:47.078343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:38.700 [2024-04-17 14:49:47.078393] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.700 [2024-04-17 14:49:47.078507] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:38.700 [2024-04-17 14:49:47.078550] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:38.700 [2024-04-17 14:49:47.078590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:38.700 [2024-04-17 14:49:47.078677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.700 [2024-04-17 14:49:47.078836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:38.700 [2024-04-17 14:49:47.078875] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:38.700 [2024-04-17 14:49:47.078961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:38.700 [2024-04-17 14:49:47.078999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.700 [2024-04-17 14:49:47.079081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:38.700 [2024-04-17 14:49:47.079119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:38.700 [2024-04-17 14:49:47.079197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:38.700 [2024-04-17 14:49:47.079294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.700 [2024-04-17 14:49:47.079372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:38.700 [2024-04-17 14:49:47.079444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:38.700 [2024-04-17 14:49:47.079487] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:38.700 [2024-04-17 14:49:47.079621] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.700 [2024-04-17 14:49:47.079724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:38.700 [2024-04-17 14:49:47.079772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:38.700 [2024-04-17 14:49:47.079813] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:38.700 [2024-04-17 14:49:47.079845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.700 [2024-04-17 14:49:47.080007] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 578.269 ms, result 0 00:30:38.700 true 00:30:38.700 14:49:47 -- ftl/restore.sh@66 -- # killprocess 79769 00:30:38.700 14:49:47 -- common/autotest_common.sh@936 -- # '[' -z 79769 ']' 00:30:38.700 14:49:47 -- 
common/autotest_common.sh@940 -- # kill -0 79769 00:30:38.700 14:49:47 -- common/autotest_common.sh@941 -- # uname 00:30:38.700 14:49:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:38.700 14:49:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79769 00:30:38.700 14:49:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:38.700 14:49:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:38.700 14:49:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79769' 00:30:38.700 killing process with pid 79769 00:30:38.700 14:49:47 -- common/autotest_common.sh@955 -- # kill 79769 00:30:38.700 14:49:47 -- common/autotest_common.sh@960 -- # wait 79769 00:30:45.264 14:49:52 -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:30:49.489 262144+0 records in 00:30:49.489 262144+0 records out 00:30:49.489 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.87552 s, 220 MB/s 00:30:49.489 14:49:57 -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:51.392 14:49:59 -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:51.392 [2024-04-17 14:49:59.858096] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:30:51.392 [2024-04-17 14:49:59.858533] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80039 ] 00:30:51.650 [2024-04-17 14:50:00.045723] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:51.909 [2024-04-17 14:50:00.380121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:52.477 [2024-04-17 14:50:00.862163] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:52.477 [2024-04-17 14:50:00.862460] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:52.477 [2024-04-17 14:50:01.017814] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.477 [2024-04-17 14:50:01.018062] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:52.477 [2024-04-17 14:50:01.018204] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:52.477 [2024-04-17 14:50:01.018253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.477 [2024-04-17 14:50:01.018383] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.477 [2024-04-17 14:50:01.018608] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:52.477 [2024-04-17 14:50:01.018664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:30:52.477 [2024-04-17 14:50:01.018707] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.477 [2024-04-17 14:50:01.018834] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:52.477 [2024-04-17 14:50:01.020300] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:52.477 [2024-04-17 14:50:01.020467] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.477 [2024-04-17 14:50:01.020568] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:52.477 [2024-04-17 14:50:01.020608] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.642 ms 00:30:52.477 [2024-04-17 14:50:01.020642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.477 [2024-04-17 14:50:01.022344] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:52.477 [2024-04-17 14:50:01.046422] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.477 [2024-04-17 14:50:01.046620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:52.477 [2024-04-17 14:50:01.046718] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.076 ms 00:30:52.477 [2024-04-17 14:50:01.046762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.477 [2024-04-17 14:50:01.046864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.477 [2024-04-17 14:50:01.046972] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:52.477 [2024-04-17 14:50:01.047010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:30:52.477 [2024-04-17 14:50:01.047044] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.477 [2024-04-17 14:50:01.054584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.477 [2024-04-17 14:50:01.054736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:52.477 [2024-04-17 14:50:01.054823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.411 ms 00:30:52.477 [2024-04-17 14:50:01.054863] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.477 [2024-04-17 14:50:01.055002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.477 [2024-04-17 14:50:01.055045] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:52.477 [2024-04-17 14:50:01.055080] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:30:52.477 [2024-04-17 14:50:01.055150] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.477 [2024-04-17 14:50:01.055220] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.477 [2024-04-17 14:50:01.055305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:52.477 [2024-04-17 14:50:01.055344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:52.477 [2024-04-17 14:50:01.055392] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.477 [2024-04-17 14:50:01.055448] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:52.477 [2024-04-17 14:50:01.061982] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.477 [2024-04-17 14:50:01.062131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:52.477 [2024-04-17 14:50:01.062214] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.542 ms 00:30:52.477 [2024-04-17 14:50:01.062255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.477 [2024-04-17 14:50:01.062381] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.477 [2024-04-17 14:50:01.062424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:52.477 [2024-04-17 14:50:01.062457] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:52.477 [2024-04-17 14:50:01.062584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.477 [2024-04-17 14:50:01.062718] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:52.477 [2024-04-17 14:50:01.062817] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:30:52.477 [2024-04-17 14:50:01.062951] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:52.477 [2024-04-17 14:50:01.063048] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:30:52.477 [2024-04-17 14:50:01.063208] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:30:52.477 [2024-04-17 14:50:01.063312] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:52.477 [2024-04-17 14:50:01.063419] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:30:52.477 [2024-04-17 14:50:01.063521] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:52.477 [2024-04-17 14:50:01.063584] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:52.477 [2024-04-17 14:50:01.063742] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:52.477 [2024-04-17 14:50:01.063777] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:52.477 [2024-04-17 14:50:01.063809] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:30:52.478 [2024-04-17 14:50:01.063841] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:30:52.478 [2024-04-17 14:50:01.063874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.478 [2024-04-17 14:50:01.063932] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:52.478 [2024-04-17 14:50:01.063967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.159 ms 00:30:52.478 [2024-04-17 14:50:01.063999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.478 [2024-04-17 14:50:01.064125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.478 [2024-04-17 14:50:01.064200] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:52.478 [2024-04-17 14:50:01.064234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:30:52.478 [2024-04-17 14:50:01.064266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.478 [2024-04-17 14:50:01.064361] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:52.478 [2024-04-17 14:50:01.064397] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:52.478 [2024-04-17 14:50:01.064462] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:52.478 [2024-04-17 14:50:01.064496] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:52.478 [2024-04-17 14:50:01.064541] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:52.478 [2024-04-17 14:50:01.064575] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 
MiB 00:30:52.478 [2024-04-17 14:50:01.064607] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:52.478 [2024-04-17 14:50:01.064639] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:52.478 [2024-04-17 14:50:01.064670] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:52.478 [2024-04-17 14:50:01.064779] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:52.478 [2024-04-17 14:50:01.064819] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:52.478 [2024-04-17 14:50:01.064851] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:52.478 [2024-04-17 14:50:01.064895] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:52.478 [2024-04-17 14:50:01.064926] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:52.478 [2024-04-17 14:50:01.064984] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:30:52.478 [2024-04-17 14:50:01.065016] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:52.478 [2024-04-17 14:50:01.065047] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:52.478 [2024-04-17 14:50:01.065079] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:30:52.478 [2024-04-17 14:50:01.065153] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:52.478 [2024-04-17 14:50:01.065232] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:30:52.478 [2024-04-17 14:50:01.065270] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:30:52.478 [2024-04-17 14:50:01.065330] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:30:52.478 [2024-04-17 14:50:01.065366] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:52.478 [2024-04-17 14:50:01.065397] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:52.478 [2024-04-17 14:50:01.065469] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:30:52.478 [2024-04-17 14:50:01.065516] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:52.478 [2024-04-17 14:50:01.065549] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:30:52.478 [2024-04-17 14:50:01.065581] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:30:52.478 [2024-04-17 14:50:01.065612] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:52.478 [2024-04-17 14:50:01.065643] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:52.478 [2024-04-17 14:50:01.065674] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:30:52.478 [2024-04-17 14:50:01.065750] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:52.478 [2024-04-17 14:50:01.065809] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:30:52.478 [2024-04-17 14:50:01.065843] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:30:52.478 [2024-04-17 14:50:01.065873] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:52.478 [2024-04-17 14:50:01.065904] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:52.478 [2024-04-17 14:50:01.065936] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:52.478 [2024-04-17 14:50:01.065968] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:30:52.478 [2024-04-17 14:50:01.065999] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:30:52.478 [2024-04-17 14:50:01.066031] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:52.478 [2024-04-17 14:50:01.066098] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:52.478 [2024-04-17 14:50:01.066140] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:52.478 [2024-04-17 14:50:01.066172] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:52.478 [2024-04-17 14:50:01.066208] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:52.478 [2024-04-17 14:50:01.066241] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:52.478 [2024-04-17 14:50:01.066273] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:52.478 [2024-04-17 14:50:01.066366] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:52.478 [2024-04-17 14:50:01.066420] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:52.478 [2024-04-17 14:50:01.066454] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:52.478 [2024-04-17 14:50:01.066487] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:52.478 [2024-04-17 14:50:01.066534] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:52.478 [2024-04-17 14:50:01.066592] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:52.478 [2024-04-17 14:50:01.066648] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:52.478 [2024-04-17 14:50:01.066837] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:30:52.478 [2024-04-17 14:50:01.066894] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:30:52.478 [2024-04-17 14:50:01.066993] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:30:52.478 [2024-04-17 14:50:01.067052] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:30:52.478 [2024-04-17 14:50:01.067107] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:30:52.478 [2024-04-17 14:50:01.067203] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:30:52.478 [2024-04-17 14:50:01.067258] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:30:52.478 [2024-04-17 14:50:01.067379] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:30:52.478 [2024-04-17 14:50:01.067475] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:30:52.478 [2024-04-17 14:50:01.067549] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:30:52.478 [2024-04-17 14:50:01.067651] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:30:52.478 [2024-04-17 14:50:01.067708] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:30:52.478 [2024-04-17 14:50:01.067827] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:52.478 [2024-04-17 14:50:01.067888] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:52.478 [2024-04-17 14:50:01.067945] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:52.478 [2024-04-17 14:50:01.068023] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:52.478 [2024-04-17 14:50:01.068112] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:52.478 [2024-04-17 14:50:01.068171] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:52.478 [2024-04-17 14:50:01.068266] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.478 [2024-04-17 14:50:01.068301] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:52.479 [2024-04-17 14:50:01.068336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.944 ms 00:30:52.479 [2024-04-17 14:50:01.068369] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.741 [2024-04-17 14:50:01.095976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.741 [2024-04-17 14:50:01.096114] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:52.741 [2024-04-17 14:50:01.096199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.507 ms 00:30:52.742 [2024-04-17 14:50:01.096234] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.742 [2024-04-17 14:50:01.096341] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.742 [2024-04-17 14:50:01.096378] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:52.742 [2024-04-17 14:50:01.096408] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:30:52.742 [2024-04-17 14:50:01.096437] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.742 [2024-04-17 14:50:01.164978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.742 [2024-04-17 14:50:01.165157] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:52.742 [2024-04-17 14:50:01.165307] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.356 ms 00:30:52.742 [2024-04-17 14:50:01.165351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.742 [2024-04-17 14:50:01.165432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.742 [2024-04-17 14:50:01.165465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:52.742 [2024-04-17 14:50:01.165506] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:52.742 [2024-04-17 14:50:01.165586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.742 [2024-04-17 14:50:01.166133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.742 [2024-04-17 14:50:01.166197] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:52.742 [2024-04-17 14:50:01.166296] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.449 ms 00:30:52.742 [2024-04-17 14:50:01.166465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.742 [2024-04-17 14:50:01.166672] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.742 [2024-04-17 14:50:01.166780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:52.742 [2024-04-17 14:50:01.166872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:30:52.742 [2024-04-17 14:50:01.166914] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.742 [2024-04-17 14:50:01.191734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.742 [2024-04-17 14:50:01.191884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:52.742 [2024-04-17 14:50:01.192021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.764 ms 00:30:52.742 [2024-04-17 14:50:01.192059] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.742 [2024-04-17 14:50:01.214229] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:52.742 [2024-04-17 14:50:01.214443] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:52.742 [2024-04-17 14:50:01.214631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.742 [2024-04-17 14:50:01.214671] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:52.742 [2024-04-17 14:50:01.214709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.421 ms 00:30:52.742 [2024-04-17 14:50:01.214745] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.742 [2024-04-17 14:50:01.248763] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.742 [2024-04-17 14:50:01.248927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:52.742 [2024-04-17 14:50:01.249014] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.947 ms 00:30:52.742 [2024-04-17 14:50:01.249049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.742 [2024-04-17 14:50:01.270441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.742 [2024-04-17 14:50:01.270636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:52.742 [2024-04-17 14:50:01.270728] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.321 ms 00:30:52.742 [2024-04-17 14:50:01.270768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.742 [2024-04-17 14:50:01.291673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.742 [2024-04-17 14:50:01.291814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:52.742 [2024-04-17 14:50:01.291890] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.816 ms 
00:30:52.742 [2024-04-17 14:50:01.291927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.742 [2024-04-17 14:50:01.292500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.742 [2024-04-17 14:50:01.292591] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:52.742 [2024-04-17 14:50:01.292681] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.447 ms 00:30:52.742 [2024-04-17 14:50:01.292766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.001 [2024-04-17 14:50:01.397233] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.001 [2024-04-17 14:50:01.397501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:53.001 [2024-04-17 14:50:01.397649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.411 ms 00:30:53.001 [2024-04-17 14:50:01.397693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.001 [2024-04-17 14:50:01.414151] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:53.001 [2024-04-17 14:50:01.417870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.001 [2024-04-17 14:50:01.418044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:53.001 [2024-04-17 14:50:01.418127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.045 ms 00:30:53.001 [2024-04-17 14:50:01.418167] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.001 [2024-04-17 14:50:01.418302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.001 [2024-04-17 14:50:01.418443] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:53.001 [2024-04-17 14:50:01.418481] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:53.001 [2024-04-17 14:50:01.418575] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.001 [2024-04-17 14:50:01.418701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.001 [2024-04-17 14:50:01.418743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:53.001 [2024-04-17 14:50:01.418779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:30:53.001 [2024-04-17 14:50:01.418812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.001 [2024-04-17 14:50:01.421195] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.001 [2024-04-17 14:50:01.421315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:30:53.001 [2024-04-17 14:50:01.421396] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.333 ms 00:30:53.001 [2024-04-17 14:50:01.421436] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.001 [2024-04-17 14:50:01.421508] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.001 [2024-04-17 14:50:01.421549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:53.001 [2024-04-17 14:50:01.421584] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:53.001 [2024-04-17 14:50:01.421670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.001 [2024-04-17 14:50:01.421745] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:53.001 
[2024-04-17 14:50:01.421785] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.001 [2024-04-17 14:50:01.421819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:53.001 [2024-04-17 14:50:01.421855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:30:53.001 [2024-04-17 14:50:01.421931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.001 [2024-04-17 14:50:01.465538] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.001 [2024-04-17 14:50:01.465749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:53.001 [2024-04-17 14:50:01.465883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.545 ms 00:30:53.001 [2024-04-17 14:50:01.465923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.001 [2024-04-17 14:50:01.466054] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.001 [2024-04-17 14:50:01.466098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:53.001 [2024-04-17 14:50:01.466191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:30:53.001 [2024-04-17 14:50:01.466230] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.001 [2024-04-17 14:50:01.467585] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 449.224 ms, result 0 00:31:23.082  Copying: 34/1024 [MB] (34 MBps) Copying: 68/1024 [MB] (34 MBps) Copying: 100/1024 [MB] (31 MBps) Copying: 132/1024 [MB] (32 MBps) Copying: 163/1024 [MB] (30 MBps) Copying: 196/1024 [MB] (33 MBps) Copying: 233/1024 [MB] (36 MBps) Copying: 268/1024 [MB] (35 MBps) Copying: 304/1024 [MB] (35 MBps) Copying: 339/1024 [MB] (35 MBps) Copying: 376/1024 [MB] (36 MBps) Copying: 413/1024 [MB] (37 MBps) Copying: 449/1024 [MB] (35 MBps) Copying: 482/1024 [MB] (33 MBps) Copying: 514/1024 [MB] (31 MBps) Copying: 545/1024 [MB] (30 MBps) Copying: 578/1024 [MB] (33 MBps) Copying: 610/1024 [MB] (32 MBps) Copying: 641/1024 [MB] (30 MBps) Copying: 675/1024 [MB] (34 MBps) Copying: 710/1024 [MB] (34 MBps) Copying: 746/1024 [MB] (35 MBps) Copying: 782/1024 [MB] (36 MBps) Copying: 819/1024 [MB] (36 MBps) Copying: 854/1024 [MB] (34 MBps) Copying: 886/1024 [MB] (32 MBps) Copying: 920/1024 [MB] (34 MBps) Copying: 953/1024 [MB] (32 MBps) Copying: 985/1024 [MB] (32 MBps) Copying: 1019/1024 [MB] (34 MBps) Copying: 1024/1024 [MB] (average 33 MBps)[2024-04-17 14:50:31.604264] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.082 [2024-04-17 14:50:31.604446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:23.082 [2024-04-17 14:50:31.604580] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:23.082 [2024-04-17 14:50:31.604626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.082 [2024-04-17 14:50:31.604732] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:23.082 [2024-04-17 14:50:31.608951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.082 [2024-04-17 14:50:31.609100] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:23.082 [2024-04-17 14:50:31.609185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.098 ms 00:31:23.082 [2024-04-17 14:50:31.609258] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.082 [2024-04-17 14:50:31.610942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.082 [2024-04-17 14:50:31.611095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:23.082 [2024-04-17 14:50:31.611189] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.627 ms 00:31:23.082 [2024-04-17 14:50:31.611230] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.082 [2024-04-17 14:50:31.626974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.082 [2024-04-17 14:50:31.627164] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:23.082 [2024-04-17 14:50:31.627265] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.650 ms 00:31:23.082 [2024-04-17 14:50:31.627307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.082 [2024-04-17 14:50:31.633104] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.082 [2024-04-17 14:50:31.633268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:31:23.082 [2024-04-17 14:50:31.633359] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.668 ms 00:31:23.082 [2024-04-17 14:50:31.633406] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.082 [2024-04-17 14:50:31.675049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.082 [2024-04-17 14:50:31.675226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:23.082 [2024-04-17 14:50:31.675306] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.488 ms 00:31:23.082 [2024-04-17 14:50:31.675375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.342 [2024-04-17 14:50:31.697802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.342 [2024-04-17 14:50:31.697947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:23.342 [2024-04-17 14:50:31.698094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.362 ms 00:31:23.342 [2024-04-17 14:50:31.698131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.342 [2024-04-17 14:50:31.698348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.342 [2024-04-17 14:50:31.698509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:23.342 [2024-04-17 14:50:31.698589] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:31:23.342 [2024-04-17 14:50:31.698628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.342 [2024-04-17 14:50:31.738921] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.342 [2024-04-17 14:50:31.739227] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:31:23.342 [2024-04-17 14:50:31.739309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.206 ms 00:31:23.342 [2024-04-17 14:50:31.739346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.342 [2024-04-17 14:50:31.782702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.342 [2024-04-17 14:50:31.782952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:31:23.342 [2024-04-17 14:50:31.783044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 43.217 ms 00:31:23.342 [2024-04-17 14:50:31.783085] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.342 [2024-04-17 14:50:31.824482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.342 [2024-04-17 14:50:31.824698] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:23.342 [2024-04-17 14:50:31.824800] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.272 ms 00:31:23.342 [2024-04-17 14:50:31.824872] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.342 [2024-04-17 14:50:31.869328] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.342 [2024-04-17 14:50:31.869598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:23.342 [2024-04-17 14:50:31.869763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.295 ms 00:31:23.342 [2024-04-17 14:50:31.869803] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.342 [2024-04-17 14:50:31.869942] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:23.342 [2024-04-17 14:50:31.869998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:23.342 [2024-04-17 14:50:31.870144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:23.342 [2024-04-17 14:50:31.870200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:23.342 [2024-04-17 14:50:31.870315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:23.342 [2024-04-17 14:50:31.870387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:23.342 [2024-04-17 14:50:31.870499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:23.342 [2024-04-17 14:50:31.870563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:23.342 [2024-04-17 14:50:31.870642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:23.342 [2024-04-17 14:50:31.870735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:23.342 [2024-04-17 14:50:31.870821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:23.342 [2024-04-17 14:50:31.870877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:23.342 [2024-04-17 14:50:31.870932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:23.342 [2024-04-17 14:50:31.871005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:23.342 [2024-04-17 14:50:31.871060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:23.342 [2024-04-17 14:50:31.871116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:23.342 [2024-04-17 14:50:31.871218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:23.342 [2024-04-17 14:50:31.871278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 
state: free 00:31:23.342 [2024-04-17 14:50:31.871361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:23.342 [2024-04-17 14:50:31.871416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:23.342 [2024-04-17 14:50:31.871588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:23.342 [2024-04-17 14:50:31.871680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:23.342 [2024-04-17 14:50:31.871776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:23.342 [2024-04-17 14:50:31.871860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:23.342 [2024-04-17 14:50:31.871961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:23.342 [2024-04-17 14:50:31.872114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:23.342 [2024-04-17 14:50:31.872204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:23.342 [2024-04-17 14:50:31.872301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:23.342 [2024-04-17 14:50:31.872386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:23.342 [2024-04-17 14:50:31.872506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:23.342 [2024-04-17 14:50:31.872668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:23.342 [2024-04-17 14:50:31.872758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:23.342 [2024-04-17 14:50:31.872879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:23.342 [2024-04-17 14:50:31.872933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:23.342 [2024-04-17 14:50:31.873044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:23.342 [2024-04-17 14:50:31.873100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:23.342 [2024-04-17 14:50:31.873152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:23.342 [2024-04-17 14:50:31.873286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:23.342 [2024-04-17 14:50:31.873339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:23.342 [2024-04-17 14:50:31.873440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.873499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.873589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.873638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 
0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.873705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.873752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.873847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.873961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.874114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.874202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.874349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.874434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.874549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.874606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.874748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.874809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.874962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.875020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.875158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.875216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.875350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.875419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.875540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.875596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.875678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.875730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.875782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.875899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.875953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.876035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.876087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.876139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.876251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.876307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.876359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.876459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.876524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.876577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.876683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.876737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.876835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.876892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.876991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.877043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.877126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.877177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.877224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.877290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.877337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.877467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.877577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.877678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.877757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.877859] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.877945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.878085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.878143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.878220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.878272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.878324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.878446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.878518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:23.343 [2024-04-17 14:50:31.878641] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:23.343 [2024-04-17 14:50:31.878682] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ef2062c2-7b02-4ec3-91e2-3a1b1495b72a 00:31:23.343 [2024-04-17 14:50:31.878786] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:23.343 [2024-04-17 14:50:31.878825] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:23.343 [2024-04-17 14:50:31.878860] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:23.343 [2024-04-17 14:50:31.878932] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:23.343 [2024-04-17 14:50:31.878980] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:23.343 [2024-04-17 14:50:31.879051] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:23.343 [2024-04-17 14:50:31.879089] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:23.343 [2024-04-17 14:50:31.879136] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:23.343 [2024-04-17 14:50:31.879170] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:23.343 [2024-04-17 14:50:31.879206] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.343 [2024-04-17 14:50:31.879240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:23.343 [2024-04-17 14:50:31.879276] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.265 ms 00:31:23.343 [2024-04-17 14:50:31.879317] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.344 [2024-04-17 14:50:31.900368] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.344 [2024-04-17 14:50:31.900650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:23.344 [2024-04-17 14:50:31.900740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.936 ms 00:31:23.344 [2024-04-17 14:50:31.900825] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.344 [2024-04-17 14:50:31.901175] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.344 [2024-04-17 14:50:31.901274] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:23.344 [2024-04-17 14:50:31.901345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:31:23.344 [2024-04-17 14:50:31.901427] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.602 [2024-04-17 14:50:31.960689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:23.602 [2024-04-17 14:50:31.960979] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:23.602 [2024-04-17 14:50:31.961080] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:23.602 [2024-04-17 14:50:31.961192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.602 [2024-04-17 14:50:31.961321] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:23.602 [2024-04-17 14:50:31.961357] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:23.602 [2024-04-17 14:50:31.961428] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:23.602 [2024-04-17 14:50:31.961538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.602 [2024-04-17 14:50:31.961704] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:23.602 [2024-04-17 14:50:31.961808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:23.602 [2024-04-17 14:50:31.961880] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:23.602 [2024-04-17 14:50:31.961954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.602 [2024-04-17 14:50:31.962008] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:23.602 [2024-04-17 14:50:31.962082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:23.602 [2024-04-17 14:50:31.962119] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:23.602 [2024-04-17 14:50:31.962183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.602 [2024-04-17 14:50:32.092224] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:23.602 [2024-04-17 14:50:32.092554] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:23.602 [2024-04-17 14:50:32.092649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:23.602 [2024-04-17 14:50:32.092691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.602 [2024-04-17 14:50:32.142943] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:23.602 [2024-04-17 14:50:32.143203] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:23.602 [2024-04-17 14:50:32.143310] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:23.602 [2024-04-17 14:50:32.143347] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.602 [2024-04-17 14:50:32.143459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:23.602 [2024-04-17 14:50:32.143515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:23.602 [2024-04-17 14:50:32.143643] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:23.602 [2024-04-17 14:50:32.143745] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.602 [2024-04-17 14:50:32.143821] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:31:23.602 [2024-04-17 14:50:32.143854] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:23.602 [2024-04-17 14:50:32.143884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:23.602 [2024-04-17 14:50:32.143914] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.602 [2024-04-17 14:50:32.144066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:23.602 [2024-04-17 14:50:32.144103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:23.602 [2024-04-17 14:50:32.144133] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:23.602 [2024-04-17 14:50:32.144186] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.602 [2024-04-17 14:50:32.144248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:23.602 [2024-04-17 14:50:32.144285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:23.602 [2024-04-17 14:50:32.144367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:23.602 [2024-04-17 14:50:32.144399] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.602 [2024-04-17 14:50:32.144460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:23.602 [2024-04-17 14:50:32.144495] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:23.602 [2024-04-17 14:50:32.144604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:23.603 [2024-04-17 14:50:32.144651] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.603 [2024-04-17 14:50:32.144775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:23.603 [2024-04-17 14:50:32.144812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:23.603 [2024-04-17 14:50:32.144845] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:23.603 [2024-04-17 14:50:32.144877] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.603 [2024-04-17 14:50:32.145031] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 540.732 ms, result 0 00:31:25.596 00:31:25.596 00:31:25.596 14:50:33 -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:31:25.596 [2024-04-17 14:50:34.090155] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:31:25.596 [2024-04-17 14:50:34.090584] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80386 ] 00:31:25.855 [2024-04-17 14:50:34.267479] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:26.113 [2024-04-17 14:50:34.567160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:26.682 [2024-04-17 14:50:35.026641] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:26.682 [2024-04-17 14:50:35.027009] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:26.682 [2024-04-17 14:50:35.189400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.682 [2024-04-17 14:50:35.189747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:26.682 [2024-04-17 14:50:35.189885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:26.682 [2024-04-17 14:50:35.189937] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.682 [2024-04-17 14:50:35.190081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.682 [2024-04-17 14:50:35.190285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:26.682 [2024-04-17 14:50:35.190401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:31:26.682 [2024-04-17 14:50:35.190446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.682 [2024-04-17 14:50:35.190526] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:26.682 [2024-04-17 14:50:35.192255] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:26.682 [2024-04-17 14:50:35.192484] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.682 [2024-04-17 14:50:35.192603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:26.682 [2024-04-17 14:50:35.192654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.963 ms 00:31:26.682 [2024-04-17 14:50:35.192695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.682 [2024-04-17 14:50:35.194671] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:26.682 [2024-04-17 14:50:35.220218] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.682 [2024-04-17 14:50:35.220428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:26.682 [2024-04-17 14:50:35.220542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.547 ms 00:31:26.682 [2024-04-17 14:50:35.220586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.682 [2024-04-17 14:50:35.220698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.682 [2024-04-17 14:50:35.220807] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:26.682 [2024-04-17 14:50:35.220855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:31:26.682 [2024-04-17 14:50:35.220891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.682 [2024-04-17 14:50:35.228682] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.682 [2024-04-17 
14:50:35.228883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:26.682 [2024-04-17 14:50:35.229000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.606 ms 00:31:26.682 [2024-04-17 14:50:35.229039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.682 [2024-04-17 14:50:35.229176] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.682 [2024-04-17 14:50:35.229221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:26.682 [2024-04-17 14:50:35.229317] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:31:26.682 [2024-04-17 14:50:35.229356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.682 [2024-04-17 14:50:35.229436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.682 [2024-04-17 14:50:35.229482] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:26.682 [2024-04-17 14:50:35.229529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:26.682 [2024-04-17 14:50:35.229617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.682 [2024-04-17 14:50:35.229682] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:26.682 [2024-04-17 14:50:35.236378] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.682 [2024-04-17 14:50:35.236557] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:26.682 [2024-04-17 14:50:35.236695] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.703 ms 00:31:26.682 [2024-04-17 14:50:35.236737] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.682 [2024-04-17 14:50:35.236854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.682 [2024-04-17 14:50:35.236902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:26.682 [2024-04-17 14:50:35.236940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:26.682 [2024-04-17 14:50:35.237018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.682 [2024-04-17 14:50:35.237125] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:26.682 [2024-04-17 14:50:35.237306] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:31:26.682 [2024-04-17 14:50:35.237403] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:26.682 [2024-04-17 14:50:35.237473] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:31:26.682 [2024-04-17 14:50:35.237731] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:31:26.682 [2024-04-17 14:50:35.237796] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:26.682 [2024-04-17 14:50:35.237855] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:31:26.682 [2024-04-17 14:50:35.237917] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:26.682 [2024-04-17 14:50:35.238088] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:26.682 [2024-04-17 14:50:35.238150] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:26.682 [2024-04-17 14:50:35.238185] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:26.682 [2024-04-17 14:50:35.238222] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:31:26.682 [2024-04-17 14:50:35.238256] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:31:26.682 [2024-04-17 14:50:35.238291] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.682 [2024-04-17 14:50:35.238326] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:26.682 [2024-04-17 14:50:35.238455] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.169 ms 00:31:26.682 [2024-04-17 14:50:35.238520] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.682 [2024-04-17 14:50:35.238623] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.682 [2024-04-17 14:50:35.238668] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:26.682 [2024-04-17 14:50:35.238706] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:31:26.682 [2024-04-17 14:50:35.238740] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.682 [2024-04-17 14:50:35.238844] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:26.682 [2024-04-17 14:50:35.238903] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:26.682 [2024-04-17 14:50:35.238968] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:26.683 [2024-04-17 14:50:35.239004] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:26.683 [2024-04-17 14:50:35.239039] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:26.683 [2024-04-17 14:50:35.239073] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:26.683 [2024-04-17 14:50:35.239107] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:26.683 [2024-04-17 14:50:35.239140] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:26.683 [2024-04-17 14:50:35.239174] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:26.683 [2024-04-17 14:50:35.239207] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:26.683 [2024-04-17 14:50:35.239319] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:26.683 [2024-04-17 14:50:35.239385] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:26.683 [2024-04-17 14:50:35.239437] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:26.683 [2024-04-17 14:50:35.239473] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:26.683 [2024-04-17 14:50:35.239518] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:31:26.683 [2024-04-17 14:50:35.239554] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:26.683 [2024-04-17 14:50:35.239588] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:26.683 [2024-04-17 14:50:35.239621] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:31:26.683 [2024-04-17 14:50:35.239727] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:31:26.683 [2024-04-17 14:50:35.239770] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:31:26.683 [2024-04-17 14:50:35.239805] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:31:26.683 [2024-04-17 14:50:35.239839] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:31:26.683 [2024-04-17 14:50:35.239873] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:26.683 [2024-04-17 14:50:35.239906] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:26.683 [2024-04-17 14:50:35.239972] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:31:26.683 [2024-04-17 14:50:35.240005] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:26.683 [2024-04-17 14:50:35.240038] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:31:26.683 [2024-04-17 14:50:35.240071] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:31:26.683 [2024-04-17 14:50:35.240161] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:26.683 [2024-04-17 14:50:35.240203] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:26.683 [2024-04-17 14:50:35.240241] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:31:26.683 [2024-04-17 14:50:35.240276] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:26.683 [2024-04-17 14:50:35.240331] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:31:26.683 [2024-04-17 14:50:35.240366] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:31:26.683 [2024-04-17 14:50:35.240400] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:26.683 [2024-04-17 14:50:35.240473] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:26.683 [2024-04-17 14:50:35.240584] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:26.683 [2024-04-17 14:50:35.240626] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:26.683 [2024-04-17 14:50:35.240716] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:31:26.683 [2024-04-17 14:50:35.240756] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:26.683 [2024-04-17 14:50:35.240828] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:26.683 [2024-04-17 14:50:35.240915] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:26.683 [2024-04-17 14:50:35.240956] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:26.683 [2024-04-17 14:50:35.241037] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:26.683 [2024-04-17 14:50:35.241078] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:26.683 [2024-04-17 14:50:35.241113] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:26.683 [2024-04-17 14:50:35.241191] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:26.683 [2024-04-17 14:50:35.241232] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:26.683 [2024-04-17 14:50:35.241265] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:26.683 [2024-04-17 14:50:35.241337] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:26.683 [2024-04-17 14:50:35.241415] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:26.683 [2024-04-17 14:50:35.241534] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:26.683 [2024-04-17 14:50:35.241696] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:26.683 [2024-04-17 14:50:35.241803] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:31:26.683 [2024-04-17 14:50:35.241869] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:31:26.683 [2024-04-17 14:50:35.241991] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:31:26.683 [2024-04-17 14:50:35.242090] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:31:26.683 [2024-04-17 14:50:35.242258] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:31:26.683 [2024-04-17 14:50:35.242321] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:31:26.683 [2024-04-17 14:50:35.242485] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:31:26.683 [2024-04-17 14:50:35.242560] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:31:26.683 [2024-04-17 14:50:35.242650] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:31:26.683 [2024-04-17 14:50:35.242750] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:31:26.683 [2024-04-17 14:50:35.242852] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:31:26.683 [2024-04-17 14:50:35.242918] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:31:26.683 [2024-04-17 14:50:35.243076] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:26.683 [2024-04-17 14:50:35.243185] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:26.683 [2024-04-17 14:50:35.243306] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:26.683 [2024-04-17 14:50:35.243365] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:26.683 [2024-04-17 14:50:35.243551] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:26.683 [2024-04-17 14:50:35.243611] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:31:26.683 [2024-04-17 14:50:35.243709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.683 [2024-04-17 14:50:35.243746] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:26.683 [2024-04-17 14:50:35.243781] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.906 ms 00:31:26.683 [2024-04-17 14:50:35.243816] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.683 [2024-04-17 14:50:35.271720] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.683 [2024-04-17 14:50:35.271910] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:26.683 [2024-04-17 14:50:35.271998] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.751 ms 00:31:26.683 [2024-04-17 14:50:35.272039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.683 [2024-04-17 14:50:35.272172] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.683 [2024-04-17 14:50:35.272297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:26.684 [2024-04-17 14:50:35.272374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:31:26.684 [2024-04-17 14:50:35.272409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.943 [2024-04-17 14:50:35.344011] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.943 [2024-04-17 14:50:35.344234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:26.943 [2024-04-17 14:50:35.344387] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.484 ms 00:31:26.943 [2024-04-17 14:50:35.344439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.943 [2024-04-17 14:50:35.344538] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.943 [2024-04-17 14:50:35.344613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:26.943 [2024-04-17 14:50:35.344683] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:26.943 [2024-04-17 14:50:35.344716] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.943 [2024-04-17 14:50:35.345242] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.943 [2024-04-17 14:50:35.345359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:26.943 [2024-04-17 14:50:35.345439] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:31:26.943 [2024-04-17 14:50:35.345476] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.943 [2024-04-17 14:50:35.345735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.943 [2024-04-17 14:50:35.345778] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:26.943 [2024-04-17 14:50:35.345814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:31:26.943 [2024-04-17 14:50:35.345848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.943 [2024-04-17 14:50:35.370597] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.943 [2024-04-17 14:50:35.370855] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:26.943 [2024-04-17 14:50:35.370992] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.563 ms 00:31:26.943 [2024-04-17 
14:50:35.371032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.943 [2024-04-17 14:50:35.394016] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:26.943 [2024-04-17 14:50:35.394292] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:26.943 [2024-04-17 14:50:35.394421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.943 [2024-04-17 14:50:35.394459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:26.943 [2024-04-17 14:50:35.394498] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.211 ms 00:31:26.943 [2024-04-17 14:50:35.394554] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.943 [2024-04-17 14:50:35.430965] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.943 [2024-04-17 14:50:35.431256] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:26.943 [2024-04-17 14:50:35.431367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.322 ms 00:31:26.943 [2024-04-17 14:50:35.431423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.943 [2024-04-17 14:50:35.455569] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.944 [2024-04-17 14:50:35.455831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:26.944 [2024-04-17 14:50:35.455970] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.009 ms 00:31:26.944 [2024-04-17 14:50:35.456027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.944 [2024-04-17 14:50:35.479953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.944 [2024-04-17 14:50:35.480214] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:26.944 [2024-04-17 14:50:35.480309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.807 ms 00:31:26.944 [2024-04-17 14:50:35.480350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:26.944 [2024-04-17 14:50:35.480977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:26.944 [2024-04-17 14:50:35.481107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:26.944 [2024-04-17 14:50:35.481191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:31:26.944 [2024-04-17 14:50:35.481232] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.202 [2024-04-17 14:50:35.591095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.202 [2024-04-17 14:50:35.591380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:27.202 [2024-04-17 14:50:35.591518] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 109.803 ms 00:31:27.202 [2024-04-17 14:50:35.591566] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.202 [2024-04-17 14:50:35.608728] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:27.202 [2024-04-17 14:50:35.612622] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.202 [2024-04-17 14:50:35.612852] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:27.202 [2024-04-17 14:50:35.612944] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.958 ms 00:31:27.202 [2024-04-17 14:50:35.612988] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.202 [2024-04-17 14:50:35.613140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.202 [2024-04-17 14:50:35.613240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:27.202 [2024-04-17 14:50:35.613288] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:27.202 [2024-04-17 14:50:35.613322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.202 [2024-04-17 14:50:35.613424] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.202 [2024-04-17 14:50:35.613464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:27.202 [2024-04-17 14:50:35.613525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:31:27.202 [2024-04-17 14:50:35.613591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.202 [2024-04-17 14:50:35.615990] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.202 [2024-04-17 14:50:35.616126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:31:27.202 [2024-04-17 14:50:35.616208] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.337 ms 00:31:27.202 [2024-04-17 14:50:35.616245] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.202 [2024-04-17 14:50:35.616306] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.202 [2024-04-17 14:50:35.616388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:27.202 [2024-04-17 14:50:35.616426] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:27.202 [2024-04-17 14:50:35.616458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.202 [2024-04-17 14:50:35.616592] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:27.202 [2024-04-17 14:50:35.616639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.202 [2024-04-17 14:50:35.616673] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:27.202 [2024-04-17 14:50:35.616755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:31:27.202 [2024-04-17 14:50:35.616796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.202 [2024-04-17 14:50:35.660800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.202 [2024-04-17 14:50:35.661078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:27.203 [2024-04-17 14:50:35.661166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.951 ms 00:31:27.203 [2024-04-17 14:50:35.661205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.203 [2024-04-17 14:50:35.661323] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.203 [2024-04-17 14:50:35.661373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:27.203 [2024-04-17 14:50:35.661480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:31:27.203 [2024-04-17 14:50:35.661549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.203 [2024-04-17 14:50:35.662870] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 472.931 ms, result 0 00:31:58.350  Copying: 1024/1024 [MB] (average 33 MBps)[2024-04-17 14:51:06.896533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.350 [2024-04-17 14:51:06.896824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:58.350 [2024-04-17 14:51:06.896931] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:58.350 [2024-04-17 14:51:06.896975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.350 [2024-04-17 14:51:06.897092] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:58.350 [2024-04-17 14:51:06.902552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.350 [2024-04-17 14:51:06.902733] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:58.350 [2024-04-17 14:51:06.902826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.381 ms 00:31:58.350 [2024-04-17 14:51:06.902914] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.350 [2024-04-17 14:51:06.903227] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.350 [2024-04-17 14:51:06.903365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:58.350 [2024-04-17 14:51:06.903469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:31:58.350 [2024-04-17 14:51:06.903549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.350 [2024-04-17 14:51:06.907521] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.350 [2024-04-17 14:51:06.908202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:58.350 [2024-04-17 14:51:06.908316] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.441 ms 00:31:58.350 [2024-04-17 14:51:06.908361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.350 [2024-04-17 14:51:06.915168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.350 [2024-04-17 14:51:06.915328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:31:58.350 [2024-04-17 14:51:06.915412] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.702 ms 00:31:58.350 [2024-04-17 14:51:06.915454] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.610 [2024-04-17 14:51:06.964235]
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.610 [2024-04-17 14:51:06.964437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:58.610 [2024-04-17 14:51:06.964539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.592 ms 00:31:58.610 [2024-04-17 14:51:06.964582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.610 [2024-04-17 14:51:06.990730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.610 [2024-04-17 14:51:06.990951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:58.610 [2024-04-17 14:51:06.991042] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.076 ms 00:31:58.610 [2024-04-17 14:51:06.991084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.610 [2024-04-17 14:51:06.991317] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.610 [2024-04-17 14:51:06.991368] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:58.610 [2024-04-17 14:51:06.991405] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:31:58.610 [2024-04-17 14:51:06.991525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.610 [2024-04-17 14:51:07.040848] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.610 [2024-04-17 14:51:07.041070] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:31:58.610 [2024-04-17 14:51:07.041160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.279 ms 00:31:58.610 [2024-04-17 14:51:07.041283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.610 [2024-04-17 14:51:07.088675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.610 [2024-04-17 14:51:07.088886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:31:58.610 [2024-04-17 14:51:07.088974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.313 ms 00:31:58.610 [2024-04-17 14:51:07.089016] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.610 [2024-04-17 14:51:07.136473] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.610 [2024-04-17 14:51:07.136763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:58.610 [2024-04-17 14:51:07.136856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.384 ms 00:31:58.610 [2024-04-17 14:51:07.136900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.610 [2024-04-17 14:51:07.183901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.610 [2024-04-17 14:51:07.184156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:58.610 [2024-04-17 14:51:07.184268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.838 ms 00:31:58.610 [2024-04-17 14:51:07.184314] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.610 [2024-04-17 14:51:07.184396] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:58.610 [2024-04-17 14:51:07.184443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:58.610 [2024-04-17 14:51:07.184601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:58.610 
[2024-04-17 14:51:07.184663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:58.610 [2024-04-17 14:51:07.184720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:58.610 [2024-04-17 14:51:07.184866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:58.610 [2024-04-17 14:51:07.184996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:58.610 [2024-04-17 14:51:07.185055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:58.610 [2024-04-17 14:51:07.185112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:58.610 [2024-04-17 14:51:07.185212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:58.610 [2024-04-17 14:51:07.185274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:58.610 [2024-04-17 14:51:07.185331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:58.610 [2024-04-17 14:51:07.185388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:58.610 [2024-04-17 14:51:07.185565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:58.610 [2024-04-17 14:51:07.185623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:58.610 [2024-04-17 14:51:07.185691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:58.610 [2024-04-17 14:51:07.185806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:58.610 [2024-04-17 14:51:07.185865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:58.610 [2024-04-17 14:51:07.185937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:58.610 [2024-04-17 14:51:07.186098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:58.610 [2024-04-17 14:51:07.186165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:58.610 [2024-04-17 14:51:07.186230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:58.610 [2024-04-17 14:51:07.186295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:58.610 [2024-04-17 14:51:07.186430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:58.610 [2024-04-17 14:51:07.186508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:58.610 [2024-04-17 14:51:07.186571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:58.610 [2024-04-17 14:51:07.186628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:58.610 [2024-04-17 14:51:07.186750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 
00:31:58.611 [2024-04-17 14:51:07.186809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.186865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.186922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.187031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.187089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.187145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.187244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.187306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.187362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.187419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.187566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.187622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.187676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.187779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.187918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.187977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.188069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.188126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.188228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.188285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.188372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.188485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.188582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.188635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.188686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 
wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.188829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.188880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.188931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.188982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.189034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.189135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.189189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.189276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.189332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.189384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.189450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.189513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.189600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.189653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.189749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.189805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.189865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.189924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.189975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.190090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.190175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.190264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.190319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.190432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.190566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.190624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.190704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.190786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.190872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.190953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.191064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.191127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.191193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.191283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.191339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.191438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.191575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.191635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.191766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.191829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.191922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.192041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.192114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.192240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.192304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.192361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:58.611 [2024-04-17 14:51:07.192453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:58.612 [2024-04-17 14:51:07.192528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:58.612 [2024-04-17 14:51:07.192738] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:58.612 [2024-04-17 14:51:07.192775] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
ef2062c2-7b02-4ec3-91e2-3a1b1495b72a 00:31:58.612 [2024-04-17 14:51:07.192831] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:58.612 [2024-04-17 14:51:07.192875] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:58.612 [2024-04-17 14:51:07.192910] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:58.612 [2024-04-17 14:51:07.192960] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:58.612 [2024-04-17 14:51:07.193050] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:58.612 [2024-04-17 14:51:07.193094] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:58.612 [2024-04-17 14:51:07.193130] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:58.612 [2024-04-17 14:51:07.193165] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:58.612 [2024-04-17 14:51:07.193199] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:58.612 [2024-04-17 14:51:07.193236] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.612 [2024-04-17 14:51:07.193272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:58.612 [2024-04-17 14:51:07.193310] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.842 ms 00:31:58.612 [2024-04-17 14:51:07.193420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.871 [2024-04-17 14:51:07.217917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.871 [2024-04-17 14:51:07.218148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:58.871 [2024-04-17 14:51:07.218239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.364 ms 00:31:58.871 [2024-04-17 14:51:07.218280] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.871 [2024-04-17 14:51:07.218693] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.871 [2024-04-17 14:51:07.218740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:58.871 [2024-04-17 14:51:07.218826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:31:58.871 [2024-04-17 14:51:07.218866] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.871 [2024-04-17 14:51:07.284995] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:58.871 [2024-04-17 14:51:07.285201] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:58.871 [2024-04-17 14:51:07.285299] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:58.871 [2024-04-17 14:51:07.285345] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.871 [2024-04-17 14:51:07.285449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:58.871 [2024-04-17 14:51:07.285487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:58.871 [2024-04-17 14:51:07.285610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:58.871 [2024-04-17 14:51:07.285651] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.871 [2024-04-17 14:51:07.285795] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:58.871 [2024-04-17 14:51:07.285844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:58.871 
[2024-04-17 14:51:07.285950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:58.871 [2024-04-17 14:51:07.285985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.871 [2024-04-17 14:51:07.286085] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:58.871 [2024-04-17 14:51:07.286126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:58.871 [2024-04-17 14:51:07.286163] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:58.871 [2024-04-17 14:51:07.286199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.871 [2024-04-17 14:51:07.420214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:58.871 [2024-04-17 14:51:07.420473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:58.871 [2024-04-17 14:51:07.420622] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:58.871 [2024-04-17 14:51:07.420665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.161 [2024-04-17 14:51:07.474301] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.161 [2024-04-17 14:51:07.474595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:59.161 [2024-04-17 14:51:07.474743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.161 [2024-04-17 14:51:07.474786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.161 [2024-04-17 14:51:07.474906] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.161 [2024-04-17 14:51:07.475015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:59.161 [2024-04-17 14:51:07.475058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.161 [2024-04-17 14:51:07.475093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.161 [2024-04-17 14:51:07.475223] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.161 [2024-04-17 14:51:07.475268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:59.161 [2024-04-17 14:51:07.475305] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.161 [2024-04-17 14:51:07.475341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.161 [2024-04-17 14:51:07.475611] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.161 [2024-04-17 14:51:07.475656] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:59.161 [2024-04-17 14:51:07.475698] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.161 [2024-04-17 14:51:07.475732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.161 [2024-04-17 14:51:07.475814] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.161 [2024-04-17 14:51:07.475919] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:59.161 [2024-04-17 14:51:07.475963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.161 [2024-04-17 14:51:07.476000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.161 [2024-04-17 14:51:07.476070] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.161 [2024-04-17 14:51:07.476151] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:59.161 [2024-04-17 14:51:07.476201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.161 [2024-04-17 14:51:07.476236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.161 [2024-04-17 14:51:07.476313] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.161 [2024-04-17 14:51:07.476353] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:59.161 [2024-04-17 14:51:07.476429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.161 [2024-04-17 14:51:07.476470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.161 [2024-04-17 14:51:07.476658] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 580.101 ms, result 0 00:32:00.538 00:32:00.538 00:32:00.538 14:51:09 -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:02.511 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:32:02.511 14:51:11 -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:32:02.769 [2024-04-17 14:51:11.162385] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:32:02.769 [2024-04-17 14:51:11.162833] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80758 ] 00:32:02.769 [2024-04-17 14:51:11.332123] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:03.337 [2024-04-17 14:51:11.658218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:03.595 [2024-04-17 14:51:12.144424] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:03.595 [2024-04-17 14:51:12.144820] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:03.855 [2024-04-17 14:51:12.314570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.855 [2024-04-17 14:51:12.314885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:03.855 [2024-04-17 14:51:12.315042] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:03.855 [2024-04-17 14:51:12.315181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.855 [2024-04-17 14:51:12.315339] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.855 [2024-04-17 14:51:12.315444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:03.855 [2024-04-17 14:51:12.315584] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:32:03.855 [2024-04-17 14:51:12.315639] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.855 [2024-04-17 14:51:12.315761] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:03.855 [2024-04-17 14:51:12.317426] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:03.855 [2024-04-17 14:51:12.317608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.855 [2024-04-17 14:51:12.317696] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:03.855 [2024-04-17 14:51:12.317739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.857 ms 00:32:03.855 [2024-04-17 14:51:12.317813] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.855 [2024-04-17 14:51:12.319487] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:03.855 [2024-04-17 14:51:12.341621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.855 [2024-04-17 14:51:12.341837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:03.855 [2024-04-17 14:51:12.341923] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.133 ms 00:32:03.855 [2024-04-17 14:51:12.341975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.855 [2024-04-17 14:51:12.342069] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.855 [2024-04-17 14:51:12.342111] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:03.855 [2024-04-17 14:51:12.342197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:32:03.855 [2024-04-17 14:51:12.342235] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.855 [2024-04-17 14:51:12.349817] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.855 [2024-04-17 14:51:12.350046] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:03.855 [2024-04-17 14:51:12.350171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.462 ms 00:32:03.855 [2024-04-17 14:51:12.350213] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.855 [2024-04-17 14:51:12.350371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.855 [2024-04-17 14:51:12.350425] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:03.855 [2024-04-17 14:51:12.350462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:32:03.855 [2024-04-17 14:51:12.350514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.855 [2024-04-17 14:51:12.350602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.855 [2024-04-17 14:51:12.350751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:03.855 [2024-04-17 14:51:12.350794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:03.855 [2024-04-17 14:51:12.350829] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.855 [2024-04-17 14:51:12.350892] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:03.855 [2024-04-17 14:51:12.357260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.855 [2024-04-17 14:51:12.357403] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:03.855 [2024-04-17 14:51:12.357532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.379 ms 00:32:03.855 [2024-04-17 14:51:12.357573] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.855 [2024-04-17 14:51:12.357637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.855 [2024-04-17 14:51:12.357701] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:03.855 [2024-04-17 14:51:12.357737] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:03.855 [2024-04-17 14:51:12.357769] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.855 [2024-04-17 14:51:12.357876] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:03.855 [2024-04-17 14:51:12.357930] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:32:03.855 [2024-04-17 14:51:12.358037] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:03.855 [2024-04-17 14:51:12.358100] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:32:03.855 [2024-04-17 14:51:12.358220] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:32:03.855 [2024-04-17 14:51:12.358280] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:03.856 [2024-04-17 14:51:12.358337] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:32:03.856 [2024-04-17 14:51:12.358525] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:03.856 [2024-04-17 14:51:12.358641] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:03.856 [2024-04-17 14:51:12.358775] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:03.856 [2024-04-17 14:51:12.358812] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:03.856 [2024-04-17 14:51:12.358846] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:32:03.856 [2024-04-17 14:51:12.358920] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:32:03.856 [2024-04-17 14:51:12.358956] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.856 [2024-04-17 14:51:12.358991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:03.856 [2024-04-17 14:51:12.359029] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.083 ms 00:32:03.856 [2024-04-17 14:51:12.359063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.856 [2024-04-17 14:51:12.359184] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.856 [2024-04-17 14:51:12.359225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:03.856 [2024-04-17 14:51:12.359295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:32:03.856 [2024-04-17 14:51:12.359329] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.856 [2024-04-17 14:51:12.359433] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:03.856 [2024-04-17 14:51:12.359519] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:03.856 [2024-04-17 14:51:12.359557] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:03.856 [2024-04-17 14:51:12.359639] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:03.856 [2024-04-17 14:51:12.359680] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:03.856 [2024-04-17 14:51:12.359757] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 
MiB 00:32:03.856 [2024-04-17 14:51:12.359796] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:03.856 [2024-04-17 14:51:12.359862] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:03.856 [2024-04-17 14:51:12.359930] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:03.856 [2024-04-17 14:51:12.359998] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:03.856 [2024-04-17 14:51:12.360037] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:03.856 [2024-04-17 14:51:12.360092] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:03.856 [2024-04-17 14:51:12.360159] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:03.856 [2024-04-17 14:51:12.360194] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:03.856 [2024-04-17 14:51:12.360228] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:32:03.856 [2024-04-17 14:51:12.360263] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:03.856 [2024-04-17 14:51:12.360353] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:03.856 [2024-04-17 14:51:12.360395] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:32:03.856 [2024-04-17 14:51:12.360429] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:03.856 [2024-04-17 14:51:12.360520] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:32:03.856 [2024-04-17 14:51:12.360562] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:32:03.856 [2024-04-17 14:51:12.360598] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:32:03.856 [2024-04-17 14:51:12.360632] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:03.856 [2024-04-17 14:51:12.360718] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:03.856 [2024-04-17 14:51:12.360758] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:32:03.856 [2024-04-17 14:51:12.360793] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:03.856 [2024-04-17 14:51:12.360828] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:32:03.856 [2024-04-17 14:51:12.360916] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:32:03.856 [2024-04-17 14:51:12.360961] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:03.856 [2024-04-17 14:51:12.361008] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:03.856 [2024-04-17 14:51:12.361053] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:32:03.856 [2024-04-17 14:51:12.361099] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:03.856 [2024-04-17 14:51:12.361282] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:32:03.856 [2024-04-17 14:51:12.361341] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:32:03.856 [2024-04-17 14:51:12.361388] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:03.856 [2024-04-17 14:51:12.361434] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:03.856 [2024-04-17 14:51:12.361479] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:03.856 [2024-04-17 14:51:12.361542] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:32:03.856 [2024-04-17 14:51:12.361590] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:32:03.856 [2024-04-17 14:51:12.361635] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:03.856 [2024-04-17 14:51:12.361679] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:03.856 [2024-04-17 14:51:12.361735] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:03.856 [2024-04-17 14:51:12.361780] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:03.856 [2024-04-17 14:51:12.361831] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:03.856 [2024-04-17 14:51:12.361877] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:03.856 [2024-04-17 14:51:12.361923] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:03.856 [2024-04-17 14:51:12.361969] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:03.856 [2024-04-17 14:51:12.362093] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:03.856 [2024-04-17 14:51:12.362144] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:03.856 [2024-04-17 14:51:12.362191] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:03.856 [2024-04-17 14:51:12.362238] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:03.856 [2024-04-17 14:51:12.362396] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:03.856 [2024-04-17 14:51:12.362579] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:03.856 [2024-04-17 14:51:12.362654] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:32:03.856 [2024-04-17 14:51:12.362729] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:32:03.856 [2024-04-17 14:51:12.362898] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:32:03.856 [2024-04-17 14:51:12.362959] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:32:03.856 [2024-04-17 14:51:12.363015] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:32:03.856 [2024-04-17 14:51:12.363126] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:32:03.856 [2024-04-17 14:51:12.363184] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:32:03.856 [2024-04-17 14:51:12.363241] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:32:03.856 [2024-04-17 14:51:12.363345] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:32:03.857 [2024-04-17 14:51:12.363404] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:32:03.857 [2024-04-17 14:51:12.363461] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:32:03.857 [2024-04-17 14:51:12.363608] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:32:03.857 [2024-04-17 14:51:12.363673] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:03.857 [2024-04-17 14:51:12.363731] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:03.857 [2024-04-17 14:51:12.363831] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:03.857 [2024-04-17 14:51:12.363925] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:03.857 [2024-04-17 14:51:12.363987] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:03.857 [2024-04-17 14:51:12.364086] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:03.857 [2024-04-17 14:51:12.364145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.857 [2024-04-17 14:51:12.364181] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:03.857 [2024-04-17 14:51:12.364218] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.753 ms 00:32:03.857 [2024-04-17 14:51:12.364253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.857 [2024-04-17 14:51:12.392313] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.857 [2024-04-17 14:51:12.392551] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:03.857 [2024-04-17 14:51:12.392648] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.958 ms 00:32:03.857 [2024-04-17 14:51:12.392690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.857 [2024-04-17 14:51:12.392826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.857 [2024-04-17 14:51:12.392872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:03.857 [2024-04-17 14:51:12.392951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:03.857 [2024-04-17 14:51:12.392990] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.116 [2024-04-17 14:51:12.462741] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.116 [2024-04-17 14:51:12.462991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:04.116 [2024-04-17 14:51:12.463096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.635 ms 00:32:04.116 [2024-04-17 14:51:12.463147] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.116 [2024-04-17 14:51:12.463244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.116 [2024-04-17 14:51:12.463283] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:04.116 [2024-04-17 14:51:12.463398] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:04.116 [2024-04-17 14:51:12.463440] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.117 [2024-04-17 14:51:12.464044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.117 [2024-04-17 14:51:12.464213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:04.117 [2024-04-17 14:51:12.464373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.462 ms 00:32:04.117 [2024-04-17 14:51:12.464432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.117 [2024-04-17 14:51:12.464667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.117 [2024-04-17 14:51:12.464725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:04.117 [2024-04-17 14:51:12.464821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:32:04.117 [2024-04-17 14:51:12.464872] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.117 [2024-04-17 14:51:12.489569] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.117 [2024-04-17 14:51:12.489802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:04.117 [2024-04-17 14:51:12.489949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.627 ms 00:32:04.117 [2024-04-17 14:51:12.489991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.117 [2024-04-17 14:51:12.512245] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:04.117 [2024-04-17 14:51:12.512521] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:04.117 [2024-04-17 14:51:12.512642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.117 [2024-04-17 14:51:12.512678] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:04.117 [2024-04-17 14:51:12.512715] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.473 ms 00:32:04.117 [2024-04-17 14:51:12.512747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.117 [2024-04-17 14:51:12.545271] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.117 [2024-04-17 14:51:12.545476] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:04.117 [2024-04-17 14:51:12.545568] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.446 ms 00:32:04.117 [2024-04-17 14:51:12.545606] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.117 [2024-04-17 14:51:12.567778] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.117 [2024-04-17 14:51:12.568014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:04.117 [2024-04-17 14:51:12.568098] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.090 ms 00:32:04.117 [2024-04-17 14:51:12.568151] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.117 [2024-04-17 14:51:12.590375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.117 [2024-04-17 14:51:12.590618] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:04.117 [2024-04-17 14:51:12.590712] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.146 ms 
00:32:04.117 [2024-04-17 14:51:12.590755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.117 [2024-04-17 14:51:12.591387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.117 [2024-04-17 14:51:12.591547] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:04.117 [2024-04-17 14:51:12.591629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.468 ms 00:32:04.117 [2024-04-17 14:51:12.591668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.117 [2024-04-17 14:51:12.695878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.117 [2024-04-17 14:51:12.696126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:04.117 [2024-04-17 14:51:12.696256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.160 ms 00:32:04.117 [2024-04-17 14:51:12.696298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.117 [2024-04-17 14:51:12.712575] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:04.117 [2024-04-17 14:51:12.716161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.117 [2024-04-17 14:51:12.716357] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:04.117 [2024-04-17 14:51:12.716437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.772 ms 00:32:04.117 [2024-04-17 14:51:12.716476] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.117 [2024-04-17 14:51:12.716631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.117 [2024-04-17 14:51:12.716709] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:04.117 [2024-04-17 14:51:12.716751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:04.117 [2024-04-17 14:51:12.716782] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.117 [2024-04-17 14:51:12.716882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.117 [2024-04-17 14:51:12.716919] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:04.117 [2024-04-17 14:51:12.716952] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:32:04.117 [2024-04-17 14:51:12.717052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.376 [2024-04-17 14:51:12.719489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.376 [2024-04-17 14:51:12.719622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:32:04.376 [2024-04-17 14:51:12.719749] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.376 ms 00:32:04.376 [2024-04-17 14:51:12.719790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.376 [2024-04-17 14:51:12.719851] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.376 [2024-04-17 14:51:12.719889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:04.376 [2024-04-17 14:51:12.719971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:04.376 [2024-04-17 14:51:12.720011] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.376 [2024-04-17 14:51:12.720080] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:04.376 
[2024-04-17 14:51:12.720119] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.376 [2024-04-17 14:51:12.720200] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:04.376 [2024-04-17 14:51:12.720245] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:32:04.376 [2024-04-17 14:51:12.720279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.376 [2024-04-17 14:51:12.763200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.376 [2024-04-17 14:51:12.763450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:04.376 [2024-04-17 14:51:12.763563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.864 ms 00:32:04.376 [2024-04-17 14:51:12.763606] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.376 [2024-04-17 14:51:12.763726] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.376 [2024-04-17 14:51:12.763829] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:04.376 [2024-04-17 14:51:12.763872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:32:04.376 [2024-04-17 14:51:12.763907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.376 [2024-04-17 14:51:12.765215] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 450.170 ms, result 0 00:32:35.783  Copying: 1024/1024 [MB] (average 32 MBps)[2024-04-17 14:51:44.273727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.783 [2024-04-17 14:51:44.273931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:35.783 [2024-04-17 14:51:44.274029] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:35.783 [2024-04-17 14:51:44.274069] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.783 [2024-04-17 14:51:44.275025] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:35.783 [2024-04-17 14:51:44.283671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.783 [2024-04-17 14:51:44.283888] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:35.783 [2024-04-17 14:51:44.284025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.390 ms 00:32:35.783 
[2024-04-17 14:51:44.284093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.783 [2024-04-17 14:51:44.296084] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.783 [2024-04-17 14:51:44.296289] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:35.783 [2024-04-17 14:51:44.296397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.382 ms 00:32:35.783 [2024-04-17 14:51:44.296440] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.783 [2024-04-17 14:51:44.319349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.783 [2024-04-17 14:51:44.319645] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:35.783 [2024-04-17 14:51:44.319771] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.831 ms 00:32:35.783 [2024-04-17 14:51:44.319871] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.783 [2024-04-17 14:51:44.325953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.783 [2024-04-17 14:51:44.326141] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:32:35.783 [2024-04-17 14:51:44.326226] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.003 ms 00:32:35.783 [2024-04-17 14:51:44.326267] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.783 [2024-04-17 14:51:44.374973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.783 [2024-04-17 14:51:44.375237] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:35.783 [2024-04-17 14:51:44.375333] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.610 ms 00:32:35.783 [2024-04-17 14:51:44.375376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.041 [2024-04-17 14:51:44.403303] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:36.041 [2024-04-17 14:51:44.403608] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:36.041 [2024-04-17 14:51:44.403783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.823 ms 00:32:36.041 [2024-04-17 14:51:44.403833] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.041 [2024-04-17 14:51:44.485443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:36.041 [2024-04-17 14:51:44.485704] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:36.041 [2024-04-17 14:51:44.485803] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.502 ms 00:32:36.041 [2024-04-17 14:51:44.485850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.042 [2024-04-17 14:51:44.534468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:36.042 [2024-04-17 14:51:44.534744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:32:36.042 [2024-04-17 14:51:44.534841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.559 ms 00:32:36.042 [2024-04-17 14:51:44.534883] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.042 [2024-04-17 14:51:44.581974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:36.042 [2024-04-17 14:51:44.582234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:32:36.042 [2024-04-17 14:51:44.582411] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.002 ms 00:32:36.042 [2024-04-17 14:51:44.582456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.042 [2024-04-17 14:51:44.630705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:36.042 [2024-04-17 14:51:44.630945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:36.042 [2024-04-17 14:51:44.631043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.135 ms 00:32:36.042 [2024-04-17 14:51:44.631084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.301 [2024-04-17 14:51:44.677012] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:36.301 [2024-04-17 14:51:44.677264] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:36.301 [2024-04-17 14:51:44.677394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.757 ms 00:32:36.301 [2024-04-17 14:51:44.677438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.301 [2024-04-17 14:51:44.677529] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:36.301 [2024-04-17 14:51:44.677581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 116224 / 261120 wr_cnt: 1 state: open 00:32:36.301 [2024-04-17 14:51:44.677702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.677760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.677815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.677911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.678021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.678111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.678182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.678298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.678370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.678430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.678543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.678601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.678709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.678767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.678852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.678926] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.679100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.679170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.679227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.679282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.679337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.679450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.679547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.679615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.679725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.679788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.679852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.679908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.680030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.680092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.680148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.680203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.680333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.680399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.680457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.680576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.680685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.680776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.680838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.680894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.681056] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.681118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.681175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.681230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.681286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.681459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.681531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.681589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.681645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:36.301 [2024-04-17 14:51:44.681701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.681866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.681928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.682016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.682073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.682173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.682243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.682333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.682442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.682612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.682671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.682728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.682784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.682841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.683010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.683067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 
14:51:44.683126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.683182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.683246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.683376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.683436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.683540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.683638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.683694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.683745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.683797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.683922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.684035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.684089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.684141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.684230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.684287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.684340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.684391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.684486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.684651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.684706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.684759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.684811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.684906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.684963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 
00:32:36.302 [2024-04-17 14:51:44.685016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.685069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.685120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.685213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.685267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.685319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.685371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.685524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.685584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:36.302 [2024-04-17 14:51:44.685649] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:36.302 [2024-04-17 14:51:44.685684] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ef2062c2-7b02-4ec3-91e2-3a1b1495b72a 00:32:36.302 [2024-04-17 14:51:44.685795] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 116224 00:32:36.302 [2024-04-17 14:51:44.685945] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 117184 00:32:36.302 [2024-04-17 14:51:44.685985] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 116224 00:32:36.302 [2024-04-17 14:51:44.686037] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0083 00:32:36.302 [2024-04-17 14:51:44.686072] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:36.302 [2024-04-17 14:51:44.686107] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:36.302 [2024-04-17 14:51:44.686204] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:36.302 [2024-04-17 14:51:44.686251] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:36.302 [2024-04-17 14:51:44.686284] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:36.302 [2024-04-17 14:51:44.686319] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:36.302 [2024-04-17 14:51:44.686372] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:36.302 [2024-04-17 14:51:44.686412] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.792 ms 00:32:36.302 [2024-04-17 14:51:44.686447] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.302 [2024-04-17 14:51:44.708209] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:36.302 [2024-04-17 14:51:44.708369] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:36.302 [2024-04-17 14:51:44.708532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.646 ms 00:32:36.302 [2024-04-17 14:51:44.708577] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.302 [2024-04-17 14:51:44.708934] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:32:36.302 [2024-04-17 14:51:44.708978] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:36.302 [2024-04-17 14:51:44.709058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:32:36.302 [2024-04-17 14:51:44.709156] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.302 [2024-04-17 14:51:44.769772] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:36.302 [2024-04-17 14:51:44.770039] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:36.302 [2024-04-17 14:51:44.770129] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:36.302 [2024-04-17 14:51:44.770179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.302 [2024-04-17 14:51:44.770282] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:36.302 [2024-04-17 14:51:44.770387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:36.302 [2024-04-17 14:51:44.770430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:36.302 [2024-04-17 14:51:44.770464] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.302 [2024-04-17 14:51:44.770684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:36.302 [2024-04-17 14:51:44.770787] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:36.302 [2024-04-17 14:51:44.770910] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:36.302 [2024-04-17 14:51:44.770950] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.302 [2024-04-17 14:51:44.771009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:36.302 [2024-04-17 14:51:44.771121] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:36.302 [2024-04-17 14:51:44.771188] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:36.302 [2024-04-17 14:51:44.771223] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.561 [2024-04-17 14:51:44.903605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:36.561 [2024-04-17 14:51:44.903858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:36.561 [2024-04-17 14:51:44.903968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:36.561 [2024-04-17 14:51:44.904010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.561 [2024-04-17 14:51:44.957149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:36.561 [2024-04-17 14:51:44.957378] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:36.561 [2024-04-17 14:51:44.957520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:36.561 [2024-04-17 14:51:44.957567] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.561 [2024-04-17 14:51:44.957687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:36.561 [2024-04-17 14:51:44.957776] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:36.561 [2024-04-17 14:51:44.957817] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:36.561 [2024-04-17 14:51:44.957851] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:32:36.561 [2024-04-17 14:51:44.957988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:36.561 [2024-04-17 14:51:44.958041] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:36.561 [2024-04-17 14:51:44.958078] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:36.561 [2024-04-17 14:51:44.958203] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.561 [2024-04-17 14:51:44.958416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:36.561 [2024-04-17 14:51:44.958463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:36.561 [2024-04-17 14:51:44.958532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:36.561 [2024-04-17 14:51:44.958579] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.561 [2024-04-17 14:51:44.958755] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:36.561 [2024-04-17 14:51:44.958800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:36.561 [2024-04-17 14:51:44.958843] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:36.561 [2024-04-17 14:51:44.958877] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.561 [2024-04-17 14:51:44.958942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:36.561 [2024-04-17 14:51:44.958980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:36.561 [2024-04-17 14:51:44.959085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:36.561 [2024-04-17 14:51:44.959120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.561 [2024-04-17 14:51:44.959192] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:36.561 [2024-04-17 14:51:44.959236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:36.561 [2024-04-17 14:51:44.959327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:36.561 [2024-04-17 14:51:44.959370] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:36.561 [2024-04-17 14:51:44.959615] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 688.961 ms, result 0 00:32:38.461 00:32:38.461 00:32:38.461 14:51:46 -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:32:38.461 [2024-04-17 14:51:46.923281] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:32:38.461 [2024-04-17 14:51:46.925425] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81115 ] 00:32:38.718 [2024-04-17 14:51:47.108791] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:38.976 [2024-04-17 14:51:47.373932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:39.234 [2024-04-17 14:51:47.813503] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:39.234 [2024-04-17 14:51:47.813804] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:39.494 [2024-04-17 14:51:47.977443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.494 [2024-04-17 14:51:47.977731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:39.494 [2024-04-17 14:51:47.977838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:39.494 [2024-04-17 14:51:47.977882] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.494 [2024-04-17 14:51:47.978041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.494 [2024-04-17 14:51:47.978154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:39.494 [2024-04-17 14:51:47.978237] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:32:39.494 [2024-04-17 14:51:47.978277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.494 [2024-04-17 14:51:47.978454] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:39.494 [2024-04-17 14:51:47.979828] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:39.494 [2024-04-17 14:51:47.979994] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.494 [2024-04-17 14:51:47.980091] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:39.494 [2024-04-17 14:51:47.980132] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.548 ms 00:32:39.494 [2024-04-17 14:51:47.980204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.494 [2024-04-17 14:51:47.981795] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:39.494 [2024-04-17 14:51:48.003667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.494 [2024-04-17 14:51:48.003911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:39.494 [2024-04-17 14:51:48.003996] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.870 ms 00:32:39.494 [2024-04-17 14:51:48.004033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.494 [2024-04-17 14:51:48.004153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.494 [2024-04-17 14:51:48.004245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:39.494 [2024-04-17 14:51:48.004283] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:32:39.494 [2024-04-17 14:51:48.004313] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.494 [2024-04-17 14:51:48.012210] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.494 [2024-04-17 
14:51:48.012445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:39.494 [2024-04-17 14:51:48.012577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.768 ms 00:32:39.494 [2024-04-17 14:51:48.012621] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.494 [2024-04-17 14:51:48.012761] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.494 [2024-04-17 14:51:48.012798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:39.494 [2024-04-17 14:51:48.012828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:32:39.494 [2024-04-17 14:51:48.012903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.494 [2024-04-17 14:51:48.013046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.494 [2024-04-17 14:51:48.013090] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:39.494 [2024-04-17 14:51:48.013181] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:39.494 [2024-04-17 14:51:48.013215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.494 [2024-04-17 14:51:48.013269] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:39.494 [2024-04-17 14:51:48.019464] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.494 [2024-04-17 14:51:48.019649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:39.494 [2024-04-17 14:51:48.019748] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.203 ms 00:32:39.494 [2024-04-17 14:51:48.019786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.494 [2024-04-17 14:51:48.019918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.494 [2024-04-17 14:51:48.020011] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:39.494 [2024-04-17 14:51:48.020082] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:39.494 [2024-04-17 14:51:48.020119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.494 [2024-04-17 14:51:48.020220] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:39.494 [2024-04-17 14:51:48.020274] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:32:39.494 [2024-04-17 14:51:48.020481] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:39.494 [2024-04-17 14:51:48.020599] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:32:39.494 [2024-04-17 14:51:48.020739] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:32:39.494 [2024-04-17 14:51:48.020823] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:39.494 [2024-04-17 14:51:48.020875] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:32:39.494 [2024-04-17 14:51:48.020963] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:39.494 [2024-04-17 14:51:48.021022] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:39.494 [2024-04-17 14:51:48.021070] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:39.494 [2024-04-17 14:51:48.021109] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:39.494 [2024-04-17 14:51:48.021138] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:32:39.494 [2024-04-17 14:51:48.021199] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:32:39.494 [2024-04-17 14:51:48.021233] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.494 [2024-04-17 14:51:48.021263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:39.494 [2024-04-17 14:51:48.021344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.016 ms 00:32:39.494 [2024-04-17 14:51:48.021377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.494 [2024-04-17 14:51:48.021468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.494 [2024-04-17 14:51:48.021535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:39.494 [2024-04-17 14:51:48.021567] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:32:39.494 [2024-04-17 14:51:48.021634] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.494 [2024-04-17 14:51:48.021734] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:39.494 [2024-04-17 14:51:48.021891] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:39.494 [2024-04-17 14:51:48.021930] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:39.494 [2024-04-17 14:51:48.021963] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:39.494 [2024-04-17 14:51:48.021996] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:39.494 [2024-04-17 14:51:48.022054] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:39.494 [2024-04-17 14:51:48.022087] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:39.494 [2024-04-17 14:51:48.022119] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:39.494 [2024-04-17 14:51:48.022150] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:39.494 [2024-04-17 14:51:48.022182] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:39.495 [2024-04-17 14:51:48.022269] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:39.495 [2024-04-17 14:51:48.022306] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:39.495 [2024-04-17 14:51:48.022352] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:39.495 [2024-04-17 14:51:48.022452] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:39.495 [2024-04-17 14:51:48.022500] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:32:39.495 [2024-04-17 14:51:48.022536] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:39.495 [2024-04-17 14:51:48.022603] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:39.495 [2024-04-17 14:51:48.022704] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:32:39.495 [2024-04-17 14:51:48.022742] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:32:39.495 [2024-04-17 14:51:48.022774] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:32:39.495 [2024-04-17 14:51:48.022875] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:32:39.495 [2024-04-17 14:51:48.022911] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:32:39.495 [2024-04-17 14:51:48.022943] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:39.495 [2024-04-17 14:51:48.022975] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:39.495 [2024-04-17 14:51:48.023036] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:32:39.495 [2024-04-17 14:51:48.023072] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:39.495 [2024-04-17 14:51:48.023103] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:32:39.495 [2024-04-17 14:51:48.023134] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:32:39.495 [2024-04-17 14:51:48.023166] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:39.495 [2024-04-17 14:51:48.023197] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:39.495 [2024-04-17 14:51:48.023255] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:32:39.495 [2024-04-17 14:51:48.023286] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:39.495 [2024-04-17 14:51:48.023317] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:32:39.495 [2024-04-17 14:51:48.023348] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:32:39.495 [2024-04-17 14:51:48.023381] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:39.495 [2024-04-17 14:51:48.023461] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:39.495 [2024-04-17 14:51:48.023550] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:39.495 [2024-04-17 14:51:48.023617] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:39.495 [2024-04-17 14:51:48.023688] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:32:39.495 [2024-04-17 14:51:48.023722] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:39.495 [2024-04-17 14:51:48.023751] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:39.495 [2024-04-17 14:51:48.023865] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:39.495 [2024-04-17 14:51:48.023899] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:39.495 [2024-04-17 14:51:48.023934] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:39.495 [2024-04-17 14:51:48.023964] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:39.495 [2024-04-17 14:51:48.024005] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:39.495 [2024-04-17 14:51:48.024034] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:39.495 [2024-04-17 14:51:48.024063] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:39.495 [2024-04-17 14:51:48.024091] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:39.495 [2024-04-17 14:51:48.024121] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:39.495 [2024-04-17 14:51:48.024152] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:39.495 [2024-04-17 14:51:48.024224] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:39.495 [2024-04-17 14:51:48.024273] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:39.495 [2024-04-17 14:51:48.024321] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:32:39.495 [2024-04-17 14:51:48.024413] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:32:39.495 [2024-04-17 14:51:48.024462] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:32:39.495 [2024-04-17 14:51:48.024534] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:32:39.495 [2024-04-17 14:51:48.024581] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:32:39.495 [2024-04-17 14:51:48.024660] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:32:39.495 [2024-04-17 14:51:48.024742] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:32:39.495 [2024-04-17 14:51:48.024828] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:32:39.495 [2024-04-17 14:51:48.024877] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:32:39.495 [2024-04-17 14:51:48.024982] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:32:39.495 [2024-04-17 14:51:48.025031] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:32:39.495 [2024-04-17 14:51:48.025077] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:32:39.495 [2024-04-17 14:51:48.025196] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:39.495 [2024-04-17 14:51:48.025247] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:39.495 [2024-04-17 14:51:48.025360] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:39.495 [2024-04-17 14:51:48.025409] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:39.495 [2024-04-17 14:51:48.025515] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:39.495 [2024-04-17 14:51:48.025572] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:32:39.495 [2024-04-17 14:51:48.025717] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.495 [2024-04-17 14:51:48.025751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:39.495 [2024-04-17 14:51:48.025782] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.018 ms 00:32:39.495 [2024-04-17 14:51:48.025821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.495 [2024-04-17 14:51:48.052214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.495 [2024-04-17 14:51:48.052361] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:39.495 [2024-04-17 14:51:48.052526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.301 ms 00:32:39.495 [2024-04-17 14:51:48.052627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.495 [2024-04-17 14:51:48.052745] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.495 [2024-04-17 14:51:48.052799] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:39.495 [2024-04-17 14:51:48.052877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:32:39.495 [2024-04-17 14:51:48.052914] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.754 [2024-04-17 14:51:48.111592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.754 [2024-04-17 14:51:48.111792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:39.754 [2024-04-17 14:51:48.111908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.575 ms 00:32:39.754 [2024-04-17 14:51:48.111952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.754 [2024-04-17 14:51:48.112029] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.754 [2024-04-17 14:51:48.112063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:39.754 [2024-04-17 14:51:48.112094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:39.754 [2024-04-17 14:51:48.112163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.754 [2024-04-17 14:51:48.112695] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.754 [2024-04-17 14:51:48.112810] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:39.754 [2024-04-17 14:51:48.112896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:32:39.754 [2024-04-17 14:51:48.112993] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.754 [2024-04-17 14:51:48.113147] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.755 [2024-04-17 14:51:48.113188] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:39.755 [2024-04-17 14:51:48.113263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:32:39.755 [2024-04-17 14:51:48.113297] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.755 [2024-04-17 14:51:48.137217] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.755 [2024-04-17 14:51:48.137361] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:39.755 [2024-04-17 14:51:48.137477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.872 ms 00:32:39.755 [2024-04-17 
14:51:48.137535] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.755 [2024-04-17 14:51:48.158991] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:32:39.755 [2024-04-17 14:51:48.159155] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:39.755 [2024-04-17 14:51:48.159269] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.755 [2024-04-17 14:51:48.159304] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:39.755 [2024-04-17 14:51:48.159370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.557 ms 00:32:39.755 [2024-04-17 14:51:48.159444] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.755 [2024-04-17 14:51:48.192537] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.755 [2024-04-17 14:51:48.192763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:39.755 [2024-04-17 14:51:48.192887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.002 ms 00:32:39.755 [2024-04-17 14:51:48.192954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.755 [2024-04-17 14:51:48.213851] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.755 [2024-04-17 14:51:48.213994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:39.755 [2024-04-17 14:51:48.214120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.808 ms 00:32:39.755 [2024-04-17 14:51:48.214188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.755 [2024-04-17 14:51:48.234406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.755 [2024-04-17 14:51:48.234595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:39.755 [2024-04-17 14:51:48.234683] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.131 ms 00:32:39.755 [2024-04-17 14:51:48.234789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.755 [2024-04-17 14:51:48.235396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.755 [2024-04-17 14:51:48.235536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:39.755 [2024-04-17 14:51:48.235618] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:32:39.755 [2024-04-17 14:51:48.235687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.755 [2024-04-17 14:51:48.337512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.755 [2024-04-17 14:51:48.337742] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:39.755 [2024-04-17 14:51:48.337857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.768 ms 00:32:39.755 [2024-04-17 14:51:48.337896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.755 [2024-04-17 14:51:48.353149] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:40.013 [2024-04-17 14:51:48.356781] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.013 [2024-04-17 14:51:48.356946] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:40.013 [2024-04-17 14:51:48.357064] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.767 ms 00:32:40.013 [2024-04-17 14:51:48.357104] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.013 [2024-04-17 14:51:48.357285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.013 [2024-04-17 14:51:48.357325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:40.013 [2024-04-17 14:51:48.357430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:40.013 [2024-04-17 14:51:48.357471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.013 [2024-04-17 14:51:48.359112] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.013 [2024-04-17 14:51:48.359256] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:40.013 [2024-04-17 14:51:48.359399] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.481 ms 00:32:40.013 [2024-04-17 14:51:48.359500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.013 [2024-04-17 14:51:48.361794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.013 [2024-04-17 14:51:48.361923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:32:40.013 [2024-04-17 14:51:48.362011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.222 ms 00:32:40.013 [2024-04-17 14:51:48.362107] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.013 [2024-04-17 14:51:48.362176] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.013 [2024-04-17 14:51:48.362293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:40.013 [2024-04-17 14:51:48.362331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:40.013 [2024-04-17 14:51:48.362373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.013 [2024-04-17 14:51:48.362538] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:40.013 [2024-04-17 14:51:48.362635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.013 [2024-04-17 14:51:48.362675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:40.013 [2024-04-17 14:51:48.362750] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:32:40.013 [2024-04-17 14:51:48.362846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.013 [2024-04-17 14:51:48.408129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.013 [2024-04-17 14:51:48.408379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:40.013 [2024-04-17 14:51:48.408558] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.216 ms 00:32:40.013 [2024-04-17 14:51:48.408598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.013 [2024-04-17 14:51:48.408765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:40.013 [2024-04-17 14:51:48.408815] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:40.013 [2024-04-17 14:51:48.408847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:32:40.013 [2024-04-17 14:51:48.408938] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.013 [2024-04-17 14:51:48.414930] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 436.152 ms, result 0
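[Editor's note] The startup trace above corresponds to bringing up an FTL bdev on top of a base device plus an NV-cache write buffer (nvc0n1p0 in this run). As a rough, hypothetical sketch of how such a device is typically stood up over SPDK's JSON-RPC, with the name ftl0 taken from this log, <base_bdev> left as a placeholder, and flags that vary between SPDK versions:

  # Hypothetical sketch; names borrowed from this log, flags vary by SPDK version.
  ./scripts/rpc.py bdev_ftl_create -b ftl0 -d <base_bdev> -c nvc0n1p0

In this job the equivalent configuration appears to be loaded by spdk_dd from test/ftl/config/ftl.json (removed during cleanup further below) rather than issued by hand.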
00:33:11.382  Copying: 1024/1024 [MB] (average 32 MBps)[2024-04-17 14:52:19.951058] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:11.382 [2024-04-17 14:52:19.951367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:33:11.382 [2024-04-17 14:52:19.951544] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:33:11.382 [2024-04-17 14:52:19.951714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:11.382 [2024-04-17 14:52:19.951823] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:33:11.382 [2024-04-17 14:52:19.956337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:11.382 [2024-04-17 14:52:19.956515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:33:11.382 [2024-04-17 14:52:19.956619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.356 ms
00:33:11.382 [2024-04-17 14:52:19.956670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:11.382 [2024-04-17 14:52:19.956958] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:11.382 [2024-04-17 14:52:19.957018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:33:11.382 [2024-04-17 14:52:19.957062] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms
00:33:11.382 [2024-04-17 14:52:19.957163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:11.382 [2024-04-17 14:52:19.962813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:11.382 [2024-04-17 14:52:19.962997] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:33:11.382 [2024-04-17 14:52:19.963110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.526 ms
00:33:11.382 [2024-04-17 14:52:19.963160] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:11.382 [2024-04-17 14:52:19.969485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:11.382 [2024-04-17 14:52:19.969646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps
00:33:11.382 [2024-04-17 14:52:19.969788] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.244 ms
00:33:11.382 [2024-04-17 14:52:19.969840] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:11.640 [2024-04-17 14:52:20.009205] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:11.640 [2024-04-17 14:52:20.009428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:33:11.640 [2024-04-17 14:52:20.009563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.250 ms
00:33:11.640 [2024-04-17 14:52:20.009613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:11.640 [2024-04-17 14:52:20.032710] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:11.640 [2024-04-17 14:52:20.032952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:33:11.640 [2024-04-17 14:52:20.033077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.011 ms
00:33:11.640 [2024-04-17 14:52:20.033143] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:11.640 [2024-04-17 14:52:20.115695] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:11.640 [2024-04-17 14:52:20.115929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:33:11.640 [2024-04-17 14:52:20.116034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.455 ms
00:33:11.640 [2024-04-17 14:52:20.116088] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:11.640 [2024-04-17 14:52:20.157687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:11.640 [2024-04-17 14:52:20.157942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata
00:33:11.640 [2024-04-17 14:52:20.158050] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.536 ms
00:33:11.640 [2024-04-17 14:52:20.158099] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:11.640 [2024-04-17 14:52:20.204262] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:11.640 [2024-04-17 14:52:20.204523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata
00:33:11.640 [2024-04-17 14:52:20.204638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.073 ms
00:33:11.640 [2024-04-17 14:52:20.204680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:11.900 [2024-04-17 14:52:20.252895] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:11.900 [2024-04-17 14:52:20.253139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:33:11.900 [2024-04-17 14:52:20.253241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.115 ms
00:33:11.900 [2024-04-17 14:52:20.253283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:11.900 [2024-04-17 14:52:20.300392] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:11.900 [2024-04-17 14:52:20.300626] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:33:11.900 [2024-04-17 14:52:20.300723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.939 ms
00:33:11.900 [2024-04-17 14:52:20.300766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:11.900 [2024-04-17 14:52:20.300895] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:33:11.900 [2024-04-17 14:52:20.300951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133632 / 261120 wr_cnt: 1 state: open
00:33:11.900 [2024-04-17 14:52:20.301131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 
wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.301191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.301249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.301359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.301416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.301472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.301586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.301648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.301705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.301803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.301860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.301953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.302056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.302148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.302247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.302309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.302414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.302576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.302636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.302693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.302797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.302856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.302971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.303028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.303120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.303224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
27: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.303322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.303423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.303523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.303626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.303687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.303822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.303921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.304040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.304128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.304185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.304241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.304344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.304406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.304464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.304556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.304710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.304770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.304883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:11.900 [2024-04-17 14:52:20.304980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.305039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.305095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.305208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.305268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.305325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.305382] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.305507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.305570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.305626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.305724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.305781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.305877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.306060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.306246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.306327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.306448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.306565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.306624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.306752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.306809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.306865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.306967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.307029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.307085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.307171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.307228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.307322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.307483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.307580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.307684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.307749] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.307806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.307906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.307965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.308061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.308123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.308180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.308274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.308330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.308423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.308589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.308649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.308706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.308836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.308891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.308946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.309002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.309118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.309204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.309258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.309313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.309367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.309520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.309576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:11.901 [2024-04-17 14:52:20.309640] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:11.901 [2024-04-17 14:52:20.309719] ftl_debug.c: 
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ef2062c2-7b02-4ec3-91e2-3a1b1495b72a
00:33:11.901 [2024-04-17 14:52:20.309779] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133632
00:33:11.901 [2024-04-17 14:52:20.309814] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 18368
00:33:11.901 [2024-04-17 14:52:20.309886] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 17408
00:33:11.901 [2024-04-17 14:52:20.309927] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0551
00:33:11.901 [2024-04-17 14:52:20.309979] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:33:11.901 [2024-04-17 14:52:20.310014] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:33:11.901 [2024-04-17 14:52:20.310096] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:33:11.901 [2024-04-17 14:52:20.310129] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:33:11.901 [2024-04-17 14:52:20.310162] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:33:11.901 [2024-04-17 14:52:20.310231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:11.901 [2024-04-17 14:52:20.310276] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:33:11.902 [2024-04-17 14:52:20.310430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.337 ms
00:33:11.902 [2024-04-17 14:52:20.310471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
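[Editor's note] The WAF figure in the statistics dump above is just the ratio of media writes to host writes: 18368 total writes / 17408 user writes ≈ 1.0551, i.e. roughly 5.5% of the writes this run were FTL housekeeping rather than user data. A quick sanity check of that arithmetic:

  # Recompute the write-amplification factor reported above.
  awk 'BEGIN { printf "WAF: %.4f\n", 18368 / 17408 }'    # prints WAF: 1.0551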
00:33:11.902 [2024-04-17 14:52:20.334515] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:11.902 [2024-04-17 14:52:20.334722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:33:11.902 [2024-04-17 14:52:20.334813] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.909 ms
00:33:11.902 [2024-04-17 14:52:20.334854] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:11.902 [2024-04-17 14:52:20.335207] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:11.902 [2024-04-17 14:52:20.335315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:33:11.902 [2024-04-17 14:52:20.335397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms
00:33:11.902 [2024-04-17 14:52:20.335437] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:11.902 [2024-04-17 14:52:20.399796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:11.902 [2024-04-17 14:52:20.400047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:33:11.902 [2024-04-17 14:52:20.400139] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:11.902 [2024-04-17 14:52:20.400189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:11.902 [2024-04-17 14:52:20.400292] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:11.902 [2024-04-17 14:52:20.400328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:33:11.902 [2024-04-17 14:52:20.400418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:11.902 [2024-04-17 14:52:20.400458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:11.902 [2024-04-17 14:52:20.400610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:11.902 [2024-04-17 14:52:20.400701] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:33:11.902 [2024-04-17 14:52:20.400736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:11.902 [2024-04-17 14:52:20.400834] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:11.902 [2024-04-17 14:52:20.400894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:11.902 [2024-04-17 14:52:20.400931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:33:11.902 [2024-04-17 14:52:20.401018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:11.902 [2024-04-17 14:52:20.401056] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:12.160 [2024-04-17 14:52:20.536667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:12.160 [2024-04-17 14:52:20.536902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:33:12.160 [2024-04-17 14:52:20.536989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:12.160 [2024-04-17 14:52:20.537031] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:12.160 [2024-04-17 14:52:20.590286] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:12.160 [2024-04-17 14:52:20.590592] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:33:12.160 [2024-04-17 14:52:20.590702] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:12.160 [2024-04-17 14:52:20.590749] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:12.160 [2024-04-17 14:52:20.590870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:12.160 [2024-04-17 14:52:20.590970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:33:12.160 [2024-04-17 14:52:20.591013] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:12.160 [2024-04-17 14:52:20.591048] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:12.160 [2024-04-17 14:52:20.591174] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:12.160 [2024-04-17 14:52:20.591240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:33:12.160 [2024-04-17 14:52:20.591435] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:12.160 [2024-04-17 14:52:20.591478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:12.160 [2024-04-17 14:52:20.591661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:12.160 [2024-04-17 14:52:20.591732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:33:12.160 [2024-04-17 14:52:20.591820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:12.160 [2024-04-17 14:52:20.591860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:12.160 [2024-04-17 14:52:20.591935] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:12.160 [2024-04-17 14:52:20.592025] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:33:12.160 [2024-04-17 14:52:20.592091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:12.160 [2024-04-17 14:52:20.592127] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:12.160 [2024-04-17 14:52:20.592231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:12.160 [2024-04-17 14:52:20.592277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:33:12.160 [2024-04-17 14:52:20.592447] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:12.160 [2024-04-17 14:52:20.592505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:12.160 [2024-04-17 14:52:20.592583] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:33:12.160 [2024-04-17 14:52:20.592629] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:33:12.160 [2024-04-17 14:52:20.592736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:33:12.160 [2024-04-17 14:52:20.592777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:12.160 [2024-04-17 14:52:20.592933] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 641.836 ms, result 0
00:33:13.537
00:33:13.537
00:33:13.537 14:52:22 -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:33:16.067 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:33:16.067 14:52:24 -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:33:16.067 14:52:24 -- ftl/restore.sh@85 -- # restore_kill
00:33:16.067 14:52:24 -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:33:16.067 14:52:24 -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:33:16.067 14:52:24 -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:33:16.067 14:52:24 -- ftl/restore.sh@32 -- # killprocess 79769
00:33:16.067 14:52:24 -- common/autotest_common.sh@936 -- # '[' -z 79769 ']'
00:33:16.067 Process with pid 79769 is not found
00:33:16.067 14:52:24 -- common/autotest_common.sh@940 -- # kill -0 79769
00:33:16.067 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (79769) - No such process
00:33:16.067 14:52:24 -- common/autotest_common.sh@963 -- # echo 'Process with pid 79769 is not found'
00:33:16.067 Remove shared memory files
00:33:16.067 14:52:24 -- ftl/restore.sh@33 -- # remove_shm
00:33:16.067 14:52:24 -- ftl/common.sh@204 -- # echo Remove shared memory files
00:33:16.067 14:52:24 -- ftl/common.sh@205 -- # rm -f rm -f
00:33:16.067 14:52:24 -- ftl/common.sh@206 -- # rm -f rm -f
00:33:16.067 14:52:24 -- ftl/common.sh@207 -- # rm -f rm -f
00:33:16.067 14:52:24 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:33:16.067 14:52:24 -- ftl/common.sh@209 -- # rm -f rm -f
00:33:16.067 ************************************
00:33:16.067 END TEST ftl_restore
00:33:16.067 ************************************
00:33:16.067
00:33:16.067 real 2m47.708s
00:33:16.067 user 2m33.767s
00:33:16.067 sys 0m16.299s
00:33:16.067 14:52:24 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:33:16.067 14:52:24 -- common/autotest_common.sh@10 -- # set +x
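[Editor's note] The just-finished restore test relies on a simple checksum round-trip: hash the test data before the shutdown/restore cycle, then re-read it through the restored FTL device and compare. A hedged sketch of that shape, with the testfile paths shortened from the traces above:

  # Sketch of the integrity-check pattern used by restore.sh; paths from this log.
  md5sum testfile > testfile.md5    # record a checksum of the written data
  # ... shut down and restore the FTL bdev, read the data back ...
  md5sum -c testfile.md5            # 'testfile: OK' means the restore preserved the data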
00:33:16.067 14:52:24 -- ftl/ftl.sh@78 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
00:33:16.067 14:52:24 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:33:16.067 14:52:24 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:33:16.067 14:52:24 -- common/autotest_common.sh@10 -- # set +x
00:33:16.067 ************************************
00:33:16.067 START TEST ftl_dirty_shutdown
00:33:16.067 ************************************
00:33:16.067 14:52:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
00:33:16.067 * Looking for test storage...
00:33:16.067 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:33:16.067 14:52:24 -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:33:16.067 14:52:24 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh
00:33:16.067 14:52:24 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:33:16.067 14:52:24 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:33:16.067 14:52:24 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:33:16.067 14:52:24 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:33:16.067 14:52:24 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:33:16.067 14:52:24 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:33:16.067 14:52:24 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:33:16.067 14:52:24 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:33:16.067 14:52:24 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:33:16.067 14:52:24 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:33:16.067 14:52:24 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:33:16.067 14:52:24 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:33:16.067 14:52:24 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:33:16.067 14:52:24 -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:33:16.067 14:52:24 -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:33:16.067 14:52:24 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:33:16.067 14:52:24 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:33:16.067 14:52:24 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:33:16.067 14:52:24 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:33:16.067 14:52:24 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:33:16.067 14:52:24 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:33:16.067 14:52:24 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:33:16.067 14:52:24 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:33:16.067 14:52:24 -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:33:16.067 14:52:24 -- ftl/common.sh@23 -- # spdk_ini_pid=
00:33:16.067 14:52:24 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:33:16.067 14:52:24 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:33:16.067 14:52:24 -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:33:16.067 14:52:24 -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:33:16.067 14:52:24 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt
00:33:16.067 14:52:24 -- ftl/dirty_shutdown.sh@15 -- # case $opt in
00:33:16.067 14:52:24 -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0
00:33:16.067 14:52:24 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt
00:33:16.067 14:52:24 -- ftl/dirty_shutdown.sh@21 -- # shift 2
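[Editor's note] The getopts trace above is the standard bash pattern for peeling the -c flag (NV cache PCI address) off the argument list before the positional base-device address is read. Roughly, and simplified from what the trace shows:

  # Simplified sketch of the option handling traced above.
  while getopts ':u:c:' opt; do
    case $opt in
      c) nv_cache=$OPTARG ;;    # -c 0000:00:10.0 in this run
    esac
  done
  shift 2                       # drop '-c <addr>', leaving the base device
  device=$1                     # 0000:00:11.0 in this run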
00:33:16.067 14:52:24 -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0
00:33:16.067 14:52:24 -- ftl/dirty_shutdown.sh@24 -- # timeout=240
00:33:16.067 14:52:24 -- ftl/dirty_shutdown.sh@26 -- # block_size=4096
00:33:16.067 14:52:24 -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144
00:33:16.067 14:52:24 -- ftl/dirty_shutdown.sh@28 -- # data_size=262144
00:33:16.067 14:52:24 -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
00:33:16.067 14:52:24 -- ftl/dirty_shutdown.sh@45 -- # svcpid=81552
00:33:16.067 14:52:24 -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81552
00:33:16.067 14:52:24 -- common/autotest_common.sh@817 -- # '[' -z 81552 ']'
00:33:16.067 14:52:24 -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:33:16.067 14:52:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:16.067 14:52:24 -- common/autotest_common.sh@822 -- # local max_retries=100
00:33:16.067 14:52:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:16.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:16.067 14:52:24 -- common/autotest_common.sh@826 -- # xtrace_disable
00:33:16.067 14:52:24 -- common/autotest_common.sh@10 -- # set +x
00:33:16.326 [2024-04-17 14:52:24.755197] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization...
00:33:16.326 [2024-04-17 14:52:24.755614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81552 ]
00:33:16.585 [2024-04-17 14:52:24.933882] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:16.585 [2024-04-17 14:52:25.187012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:33:17.963 14:52:26 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:33:17.963 14:52:26 -- common/autotest_common.sh@850 -- # return 0
00:33:17.963 14:52:26 -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424
00:33:17.963 14:52:26 -- ftl/common.sh@54 -- # local name=nvme0
00:33:17.963 14:52:26 -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
00:33:17.963 14:52:26 -- ftl/common.sh@56 -- # local size=103424
00:33:17.963 14:52:26 -- ftl/common.sh@59 -- # local base_bdev
00:33:17.963 14:52:26 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:33:18.222 14:52:26 -- ftl/common.sh@60 -- # base_bdev=nvme0n1
00:33:18.222 14:52:26 -- ftl/common.sh@62 -- # local base_size
00:33:18.222 14:52:26 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:33:18.222 14:52:26 -- common/autotest_common.sh@1364 -- # local bdev_name=nvme0n1
00:33:18.222 14:52:26 -- common/autotest_common.sh@1365 -- # local bdev_info
00:33:18.222 14:52:26 -- common/autotest_common.sh@1366 -- # local bs
00:33:18.222 14:52:26 -- common/autotest_common.sh@1367 -- # local nb
00:33:18.222 14:52:26 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
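[Editor's note] get_bdev_size, traced above and below, derives the bdev size in MiB from the bdev_get_bdevs JSON: block_size times num_blocks, divided down to MiB. A condensed sketch of the same derivation, with the rpc.py path shortened; the values match this log:

  # Condensed sketch of get_bdev_size; values match this log.
  bs=$(rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[] .block_size')    # 4096
  nb=$(rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[] .num_blocks')    # 1310720
  echo $(( bs * nb / 1024 / 1024 ))                                # 5120 (MiB)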
"num_blocks": 1310720, 00:33:18.480 "uuid": "6d1ff964-64d6-447a-8a7e-4d304abcd46a", 00:33:18.480 "assigned_rate_limits": { 00:33:18.480 "rw_ios_per_sec": 0, 00:33:18.480 "rw_mbytes_per_sec": 0, 00:33:18.480 "r_mbytes_per_sec": 0, 00:33:18.480 "w_mbytes_per_sec": 0 00:33:18.480 }, 00:33:18.480 "claimed": true, 00:33:18.480 "claim_type": "read_many_write_one", 00:33:18.480 "zoned": false, 00:33:18.480 "supported_io_types": { 00:33:18.480 "read": true, 00:33:18.480 "write": true, 00:33:18.480 "unmap": true, 00:33:18.480 "write_zeroes": true, 00:33:18.480 "flush": true, 00:33:18.480 "reset": true, 00:33:18.480 "compare": true, 00:33:18.480 "compare_and_write": false, 00:33:18.480 "abort": true, 00:33:18.480 "nvme_admin": true, 00:33:18.480 "nvme_io": true 00:33:18.480 }, 00:33:18.480 "driver_specific": { 00:33:18.480 "nvme": [ 00:33:18.480 { 00:33:18.480 "pci_address": "0000:00:11.0", 00:33:18.480 "trid": { 00:33:18.480 "trtype": "PCIe", 00:33:18.480 "traddr": "0000:00:11.0" 00:33:18.480 }, 00:33:18.480 "ctrlr_data": { 00:33:18.480 "cntlid": 0, 00:33:18.480 "vendor_id": "0x1b36", 00:33:18.480 "model_number": "QEMU NVMe Ctrl", 00:33:18.480 "serial_number": "12341", 00:33:18.480 "firmware_revision": "8.0.0", 00:33:18.480 "subnqn": "nqn.2019-08.org.qemu:12341", 00:33:18.480 "oacs": { 00:33:18.480 "security": 0, 00:33:18.480 "format": 1, 00:33:18.480 "firmware": 0, 00:33:18.480 "ns_manage": 1 00:33:18.480 }, 00:33:18.480 "multi_ctrlr": false, 00:33:18.480 "ana_reporting": false 00:33:18.480 }, 00:33:18.480 "vs": { 00:33:18.480 "nvme_version": "1.4" 00:33:18.480 }, 00:33:18.480 "ns_data": { 00:33:18.480 "id": 1, 00:33:18.480 "can_share": false 00:33:18.480 } 00:33:18.480 } 00:33:18.480 ], 00:33:18.480 "mp_policy": "active_passive" 00:33:18.480 } 00:33:18.480 } 00:33:18.480 ]' 00:33:18.480 14:52:26 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:33:18.480 14:52:26 -- common/autotest_common.sh@1369 -- # bs=4096 00:33:18.480 14:52:26 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:33:18.480 14:52:26 -- common/autotest_common.sh@1370 -- # nb=1310720 00:33:18.480 14:52:26 -- common/autotest_common.sh@1373 -- # bdev_size=5120 00:33:18.480 14:52:26 -- common/autotest_common.sh@1374 -- # echo 5120 00:33:18.480 14:52:26 -- ftl/common.sh@63 -- # base_size=5120 00:33:18.480 14:52:26 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:33:18.480 14:52:26 -- ftl/common.sh@67 -- # clear_lvols 00:33:18.480 14:52:26 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:33:18.480 14:52:26 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:18.739 14:52:27 -- ftl/common.sh@28 -- # stores=d67ae1a5-b914-4937-89fe-9366d55647c5 00:33:18.739 14:52:27 -- ftl/common.sh@29 -- # for lvs in $stores 00:33:18.739 14:52:27 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d67ae1a5-b914-4937-89fe-9366d55647c5 00:33:18.998 14:52:27 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:33:19.256 14:52:27 -- ftl/common.sh@68 -- # lvs=d4b5f692-51df-439b-a95f-dc4caab03153 00:33:19.256 14:52:27 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d4b5f692-51df-439b-a95f-dc4caab03153 00:33:19.513 14:52:28 -- ftl/dirty_shutdown.sh@49 -- # split_bdev=72374285-e659-454f-a053-ecd9dab60188 00:33:19.513 14:52:28 -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:33:19.513 14:52:28 -- ftl/dirty_shutdown.sh@52 -- # 
create_nv_cache_bdev nvc0 0000:00:10.0 72374285-e659-454f-a053-ecd9dab60188 00:33:19.514 14:52:28 -- ftl/common.sh@35 -- # local name=nvc0 00:33:19.514 14:52:28 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:33:19.514 14:52:28 -- ftl/common.sh@37 -- # local base_bdev=72374285-e659-454f-a053-ecd9dab60188 00:33:19.514 14:52:28 -- ftl/common.sh@38 -- # local cache_size= 00:33:19.514 14:52:28 -- ftl/common.sh@41 -- # get_bdev_size 72374285-e659-454f-a053-ecd9dab60188 00:33:19.514 14:52:28 -- common/autotest_common.sh@1364 -- # local bdev_name=72374285-e659-454f-a053-ecd9dab60188 00:33:19.514 14:52:28 -- common/autotest_common.sh@1365 -- # local bdev_info 00:33:19.514 14:52:28 -- common/autotest_common.sh@1366 -- # local bs 00:33:19.514 14:52:28 -- common/autotest_common.sh@1367 -- # local nb 00:33:19.514 14:52:28 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 72374285-e659-454f-a053-ecd9dab60188 00:33:19.772 14:52:28 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:33:19.772 { 00:33:19.772 "name": "72374285-e659-454f-a053-ecd9dab60188", 00:33:19.772 "aliases": [ 00:33:19.772 "lvs/nvme0n1p0" 00:33:19.772 ], 00:33:19.772 "product_name": "Logical Volume", 00:33:19.772 "block_size": 4096, 00:33:19.772 "num_blocks": 26476544, 00:33:19.772 "uuid": "72374285-e659-454f-a053-ecd9dab60188", 00:33:19.772 "assigned_rate_limits": { 00:33:19.772 "rw_ios_per_sec": 0, 00:33:19.772 "rw_mbytes_per_sec": 0, 00:33:19.772 "r_mbytes_per_sec": 0, 00:33:19.772 "w_mbytes_per_sec": 0 00:33:19.772 }, 00:33:19.772 "claimed": false, 00:33:19.772 "zoned": false, 00:33:19.772 "supported_io_types": { 00:33:19.772 "read": true, 00:33:19.772 "write": true, 00:33:19.772 "unmap": true, 00:33:19.772 "write_zeroes": true, 00:33:19.772 "flush": false, 00:33:19.772 "reset": true, 00:33:19.772 "compare": false, 00:33:19.772 "compare_and_write": false, 00:33:19.772 "abort": false, 00:33:19.772 "nvme_admin": false, 00:33:19.772 "nvme_io": false 00:33:19.772 }, 00:33:19.772 "driver_specific": { 00:33:19.772 "lvol": { 00:33:19.772 "lvol_store_uuid": "d4b5f692-51df-439b-a95f-dc4caab03153", 00:33:19.772 "base_bdev": "nvme0n1", 00:33:19.772 "thin_provision": true, 00:33:19.772 "snapshot": false, 00:33:19.772 "clone": false, 00:33:19.772 "esnap_clone": false 00:33:19.772 } 00:33:19.772 } 00:33:19.772 } 00:33:19.772 ]' 00:33:19.772 14:52:28 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:33:19.772 14:52:28 -- common/autotest_common.sh@1369 -- # bs=4096 00:33:19.772 14:52:28 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:33:20.031 14:52:28 -- common/autotest_common.sh@1370 -- # nb=26476544 00:33:20.031 14:52:28 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:33:20.031 14:52:28 -- common/autotest_common.sh@1374 -- # echo 103424 00:33:20.031 14:52:28 -- ftl/common.sh@41 -- # local base_size=5171 00:33:20.031 14:52:28 -- ftl/common.sh@44 -- # local nvc_bdev 00:33:20.031 14:52:28 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:33:20.289 14:52:28 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:33:20.289 14:52:28 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:33:20.289 14:52:28 -- ftl/common.sh@48 -- # get_bdev_size 72374285-e659-454f-a053-ecd9dab60188 00:33:20.289 14:52:28 -- common/autotest_common.sh@1364 -- # local bdev_name=72374285-e659-454f-a053-ecd9dab60188 00:33:20.289 14:52:28 -- common/autotest_common.sh@1365 -- # local bdev_info 00:33:20.289 14:52:28 -- 
common/autotest_common.sh@1366 -- # local bs 00:33:20.289 14:52:28 -- common/autotest_common.sh@1367 -- # local nb 00:33:20.289 14:52:28 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 72374285-e659-454f-a053-ecd9dab60188 00:33:20.548 14:52:28 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:33:20.548 { 00:33:20.548 "name": "72374285-e659-454f-a053-ecd9dab60188", 00:33:20.548 "aliases": [ 00:33:20.548 "lvs/nvme0n1p0" 00:33:20.548 ], 00:33:20.548 "product_name": "Logical Volume", 00:33:20.548 "block_size": 4096, 00:33:20.548 "num_blocks": 26476544, 00:33:20.548 "uuid": "72374285-e659-454f-a053-ecd9dab60188", 00:33:20.548 "assigned_rate_limits": { 00:33:20.548 "rw_ios_per_sec": 0, 00:33:20.548 "rw_mbytes_per_sec": 0, 00:33:20.548 "r_mbytes_per_sec": 0, 00:33:20.548 "w_mbytes_per_sec": 0 00:33:20.548 }, 00:33:20.548 "claimed": false, 00:33:20.548 "zoned": false, 00:33:20.548 "supported_io_types": { 00:33:20.548 "read": true, 00:33:20.548 "write": true, 00:33:20.548 "unmap": true, 00:33:20.548 "write_zeroes": true, 00:33:20.548 "flush": false, 00:33:20.548 "reset": true, 00:33:20.548 "compare": false, 00:33:20.548 "compare_and_write": false, 00:33:20.548 "abort": false, 00:33:20.548 "nvme_admin": false, 00:33:20.548 "nvme_io": false 00:33:20.548 }, 00:33:20.548 "driver_specific": { 00:33:20.548 "lvol": { 00:33:20.548 "lvol_store_uuid": "d4b5f692-51df-439b-a95f-dc4caab03153", 00:33:20.548 "base_bdev": "nvme0n1", 00:33:20.548 "thin_provision": true, 00:33:20.548 "snapshot": false, 00:33:20.548 "clone": false, 00:33:20.548 "esnap_clone": false 00:33:20.548 } 00:33:20.548 } 00:33:20.548 } 00:33:20.548 ]' 00:33:20.548 14:52:28 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:33:20.548 14:52:28 -- common/autotest_common.sh@1369 -- # bs=4096 00:33:20.548 14:52:28 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:33:20.548 14:52:29 -- common/autotest_common.sh@1370 -- # nb=26476544 00:33:20.548 14:52:29 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:33:20.548 14:52:29 -- common/autotest_common.sh@1374 -- # echo 103424 00:33:20.548 14:52:29 -- ftl/common.sh@48 -- # cache_size=5171 00:33:20.548 14:52:29 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:33:20.806 14:52:29 -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:33:20.806 14:52:29 -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 72374285-e659-454f-a053-ecd9dab60188 00:33:20.806 14:52:29 -- common/autotest_common.sh@1364 -- # local bdev_name=72374285-e659-454f-a053-ecd9dab60188 00:33:20.806 14:52:29 -- common/autotest_common.sh@1365 -- # local bdev_info 00:33:20.806 14:52:29 -- common/autotest_common.sh@1366 -- # local bs 00:33:20.806 14:52:29 -- common/autotest_common.sh@1367 -- # local nb 00:33:20.806 14:52:29 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 72374285-e659-454f-a053-ecd9dab60188 00:33:21.065 14:52:29 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:33:21.065 { 00:33:21.065 "name": "72374285-e659-454f-a053-ecd9dab60188", 00:33:21.065 "aliases": [ 00:33:21.065 "lvs/nvme0n1p0" 00:33:21.065 ], 00:33:21.065 "product_name": "Logical Volume", 00:33:21.065 "block_size": 4096, 00:33:21.065 "num_blocks": 26476544, 00:33:21.065 "uuid": "72374285-e659-454f-a053-ecd9dab60188", 00:33:21.065 "assigned_rate_limits": { 00:33:21.065 "rw_ios_per_sec": 0, 00:33:21.065 "rw_mbytes_per_sec": 0, 00:33:21.065 "r_mbytes_per_sec": 0, 
00:33:21.065 "w_mbytes_per_sec": 0 00:33:21.065 }, 00:33:21.065 "claimed": false, 00:33:21.065 "zoned": false, 00:33:21.065 "supported_io_types": { 00:33:21.065 "read": true, 00:33:21.065 "write": true, 00:33:21.065 "unmap": true, 00:33:21.065 "write_zeroes": true, 00:33:21.065 "flush": false, 00:33:21.065 "reset": true, 00:33:21.065 "compare": false, 00:33:21.065 "compare_and_write": false, 00:33:21.065 "abort": false, 00:33:21.065 "nvme_admin": false, 00:33:21.065 "nvme_io": false 00:33:21.065 }, 00:33:21.065 "driver_specific": { 00:33:21.065 "lvol": { 00:33:21.065 "lvol_store_uuid": "d4b5f692-51df-439b-a95f-dc4caab03153", 00:33:21.065 "base_bdev": "nvme0n1", 00:33:21.065 "thin_provision": true, 00:33:21.065 "snapshot": false, 00:33:21.065 "clone": false, 00:33:21.065 "esnap_clone": false 00:33:21.065 } 00:33:21.065 } 00:33:21.065 } 00:33:21.065 ]' 00:33:21.065 14:52:29 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:33:21.065 14:52:29 -- common/autotest_common.sh@1369 -- # bs=4096 00:33:21.065 14:52:29 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:33:21.065 14:52:29 -- common/autotest_common.sh@1370 -- # nb=26476544 00:33:21.065 14:52:29 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:33:21.065 14:52:29 -- common/autotest_common.sh@1374 -- # echo 103424 00:33:21.065 14:52:29 -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:33:21.065 14:52:29 -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 72374285-e659-454f-a053-ecd9dab60188 --l2p_dram_limit 10' 00:33:21.065 14:52:29 -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:33:21.065 14:52:29 -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:33:21.065 14:52:29 -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:33:21.065 14:52:29 -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 72374285-e659-454f-a053-ecd9dab60188 --l2p_dram_limit 10 -c nvc0n1p0 00:33:21.372 [2024-04-17 14:52:29.892763] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.372 [2024-04-17 14:52:29.893045] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:21.372 [2024-04-17 14:52:29.893209] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:21.372 [2024-04-17 14:52:29.893262] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.372 [2024-04-17 14:52:29.893377] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.372 [2024-04-17 14:52:29.893561] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:21.372 [2024-04-17 14:52:29.893662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:33:21.372 [2024-04-17 14:52:29.893702] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.372 [2024-04-17 14:52:29.893795] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:21.372 [2024-04-17 14:52:29.895398] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:21.372 [2024-04-17 14:52:29.895606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.373 [2024-04-17 14:52:29.895722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:21.373 [2024-04-17 14:52:29.895772] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.816 ms 00:33:21.373 [2024-04-17 
14:52:29.895877] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.373 [2024-04-17 14:52:29.896097] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6bac9abf-074b-45be-bea1-6bcd4aa06261 00:33:21.373 [2024-04-17 14:52:29.897614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.373 [2024-04-17 14:52:29.897781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:33:21.373 [2024-04-17 14:52:29.897936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:33:21.373 [2024-04-17 14:52:29.897989] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.373 [2024-04-17 14:52:29.905833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.373 [2024-04-17 14:52:29.906074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:21.373 [2024-04-17 14:52:29.906186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.662 ms 00:33:21.373 [2024-04-17 14:52:29.906237] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.373 [2024-04-17 14:52:29.906396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.373 [2024-04-17 14:52:29.906446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:21.373 [2024-04-17 14:52:29.906599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:33:21.373 [2024-04-17 14:52:29.906662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.373 [2024-04-17 14:52:29.906770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.373 [2024-04-17 14:52:29.906816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:21.373 [2024-04-17 14:52:29.906852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:33:21.373 [2024-04-17 14:52:29.906970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.373 [2024-04-17 14:52:29.907043] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:21.373 [2024-04-17 14:52:29.914422] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.373 [2024-04-17 14:52:29.914609] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:21.373 [2024-04-17 14:52:29.914746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.381 ms 00:33:21.373 [2024-04-17 14:52:29.914795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.373 [2024-04-17 14:52:29.914868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.373 [2024-04-17 14:52:29.914981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:21.373 [2024-04-17 14:52:29.915068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:33:21.373 [2024-04-17 14:52:29.915112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.373 [2024-04-17 14:52:29.915207] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:33:21.373 [2024-04-17 14:52:29.915364] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:33:21.373 [2024-04-17 14:52:29.915472] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 
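The repeated block_size/num_blocks probing earlier in this trace is the get_bdev_size helper from common/autotest_common.sh. A minimal sketch of that pattern, kept to the exact rpc.py and jq calls the xtrace shows (the one-element-array shape of the bdev_get_bdevs -b output is taken from the JSON dumps above):

    # Sketch of the get_bdev_size pattern from the trace; paths as logged.
    get_bdev_size() {
        local bdev_name=$1
        local bdev_info bs nb
        bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 4096 for both bdevs above
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 1310720 (nvme0n1), 26476544 (lvol)
        echo $(( bs * nb / 1048576 ))                 # size in MiB: 5120 and 103424, as echoed
    }

This is also why the [[ 103424 -le 5120 ]] guard above falls through: the requested 103424 MiB lvol is far larger than the 5120 MiB base namespace, which only works because bdev_lvol_create was invoked with -t (thin provisioning, visible as "thin_provision": true in the bdev dumps).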
00:33:21.373 [2024-04-17 14:52:29.915551] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:33:21.373 [2024-04-17 14:52:29.915776] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:21.373 [2024-04-17 14:52:29.915921] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:21.373 [2024-04-17 14:52:29.915995] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:21.373 [2024-04-17 14:52:29.916206] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:21.373 [2024-04-17 14:52:29.916259] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:33:21.373 [2024-04-17 14:52:29.916295] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:33:21.373 [2024-04-17 14:52:29.916350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.373 [2024-04-17 14:52:29.916427] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:21.373 [2024-04-17 14:52:29.916519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.142 ms 00:33:21.373 [2024-04-17 14:52:29.916562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.373 [2024-04-17 14:52:29.916664] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.373 [2024-04-17 14:52:29.916702] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:21.373 [2024-04-17 14:52:29.916751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:33:21.373 [2024-04-17 14:52:29.916901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.373 [2024-04-17 14:52:29.917034] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:21.373 [2024-04-17 14:52:29.917173] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:21.373 [2024-04-17 14:52:29.917226] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:21.373 [2024-04-17 14:52:29.917262] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:21.373 [2024-04-17 14:52:29.917410] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:21.373 [2024-04-17 14:52:29.917457] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:21.373 [2024-04-17 14:52:29.917505] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:21.373 [2024-04-17 14:52:29.917627] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:21.373 [2024-04-17 14:52:29.917677] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:21.373 [2024-04-17 14:52:29.917713] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:21.373 [2024-04-17 14:52:29.917806] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:21.373 [2024-04-17 14:52:29.917916] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:21.373 [2024-04-17 14:52:29.918035] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:21.373 [2024-04-17 14:52:29.918172] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:21.373 [2024-04-17 14:52:29.918231] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:33:21.373 [2024-04-17 14:52:29.918334] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:21.373 [2024-04-17 14:52:29.918422] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:21.373 [2024-04-17 14:52:29.918528] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:33:21.373 [2024-04-17 14:52:29.918659] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:21.373 [2024-04-17 14:52:29.918724] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:33:21.373 [2024-04-17 14:52:29.918825] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:33:21.373 [2024-04-17 14:52:29.918872] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:33:21.373 [2024-04-17 14:52:29.918966] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:21.373 [2024-04-17 14:52:29.919092] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:21.373 [2024-04-17 14:52:29.919194] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:33:21.373 [2024-04-17 14:52:29.919293] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:21.373 [2024-04-17 14:52:29.919351] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:33:21.373 [2024-04-17 14:52:29.919452] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:33:21.373 [2024-04-17 14:52:29.919525] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:21.373 [2024-04-17 14:52:29.919568] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:21.373 [2024-04-17 14:52:29.919670] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:33:21.373 [2024-04-17 14:52:29.919772] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:21.373 [2024-04-17 14:52:29.919826] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:33:21.373 [2024-04-17 14:52:29.919863] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:33:21.373 [2024-04-17 14:52:29.920028] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:21.373 [2024-04-17 14:52:29.920081] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:21.373 [2024-04-17 14:52:29.920128] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:21.373 [2024-04-17 14:52:29.920186] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:21.373 [2024-04-17 14:52:29.920241] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:33:21.373 [2024-04-17 14:52:29.920280] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:21.373 [2024-04-17 14:52:29.920316] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:21.373 [2024-04-17 14:52:29.920352] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:21.373 [2024-04-17 14:52:29.920465] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:21.374 [2024-04-17 14:52:29.920527] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:21.374 [2024-04-17 14:52:29.920569] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:21.374 [2024-04-17 14:52:29.920604] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:21.374 [2024-04-17 14:52:29.920680] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:21.374 
[2024-04-17 14:52:29.920718] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:21.374 [2024-04-17 14:52:29.920813] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:21.374 [2024-04-17 14:52:29.920858] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:21.374 [2024-04-17 14:52:29.920945] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:21.374 [2024-04-17 14:52:29.921022] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:21.374 [2024-04-17 14:52:29.921085] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:21.374 [2024-04-17 14:52:29.921220] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:33:21.374 [2024-04-17 14:52:29.921288] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:33:21.374 [2024-04-17 14:52:29.921398] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:33:21.374 [2024-04-17 14:52:29.921558] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:33:21.374 [2024-04-17 14:52:29.921727] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:33:21.374 [2024-04-17 14:52:29.921807] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:33:21.374 [2024-04-17 14:52:29.921984] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:33:21.374 [2024-04-17 14:52:29.922059] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:33:21.374 [2024-04-17 14:52:29.922197] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:33:21.374 [2024-04-17 14:52:29.922277] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:33:21.374 [2024-04-17 14:52:29.922422] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:33:21.374 [2024-04-17 14:52:29.922571] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:33:21.374 [2024-04-17 14:52:29.922729] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:21.374 [2024-04-17 14:52:29.922867] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:21.374 [2024-04-17 14:52:29.923010] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:21.374 [2024-04-17 14:52:29.923094] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:21.374 [2024-04-17 14:52:29.923270] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:21.374 [2024-04-17 14:52:29.923352] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:21.374 [2024-04-17 14:52:29.923540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.374 [2024-04-17 14:52:29.923594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:21.374 [2024-04-17 14:52:29.923732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.552 ms 00:33:21.374 [2024-04-17 14:52:29.923784] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.374 [2024-04-17 14:52:29.952482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.374 [2024-04-17 14:52:29.952741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:21.374 [2024-04-17 14:52:29.952893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.596 ms 00:33:21.374 [2024-04-17 14:52:29.952948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.374 [2024-04-17 14:52:29.953174] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.374 [2024-04-17 14:52:29.953240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:21.374 [2024-04-17 14:52:29.953280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:33:21.374 [2024-04-17 14:52:29.953317] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.633 [2024-04-17 14:52:30.015703] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.633 [2024-04-17 14:52:30.015941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:21.633 [2024-04-17 14:52:30.016073] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.226 ms 00:33:21.633 [2024-04-17 14:52:30.016197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.633 [2024-04-17 14:52:30.016288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.633 [2024-04-17 14:52:30.016357] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:21.633 [2024-04-17 14:52:30.016457] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:21.633 [2024-04-17 14:52:30.016612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.633 [2024-04-17 14:52:30.017193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.633 [2024-04-17 14:52:30.017333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:21.633 [2024-04-17 14:52:30.017441] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:33:21.633 [2024-04-17 14:52:30.017608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.633 [2024-04-17 14:52:30.017808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.633 [2024-04-17 14:52:30.017869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:21.633 [2024-04-17 14:52:30.018026] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:33:21.633 [2024-04-17 14:52:30.018088] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:33:21.633 [2024-04-17 14:52:30.046666] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.633 [2024-04-17 14:52:30.046911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:21.633 [2024-04-17 14:52:30.047049] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.394 ms 00:33:21.633 [2024-04-17 14:52:30.047110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.633 [2024-04-17 14:52:30.064188] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:21.633 [2024-04-17 14:52:30.067911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.633 [2024-04-17 14:52:30.068073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:21.633 [2024-04-17 14:52:30.068198] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.628 ms 00:33:21.633 [2024-04-17 14:52:30.068244] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.633 [2024-04-17 14:52:30.145065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.633 [2024-04-17 14:52:30.145349] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:33:21.633 [2024-04-17 14:52:30.145467] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.739 ms 00:33:21.633 [2024-04-17 14:52:30.145539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.633 [2024-04-17 14:52:30.145637] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:33:21.633 [2024-04-17 14:52:30.145777] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:33:24.163 [2024-04-17 14:52:32.458176] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.163 [2024-04-17 14:52:32.458689] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:33:24.163 [2024-04-17 14:52:32.458867] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2312.511 ms 00:33:24.163 [2024-04-17 14:52:32.458934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.163 [2024-04-17 14:52:32.459310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.163 [2024-04-17 14:52:32.459462] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:24.163 [2024-04-17 14:52:32.459591] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:33:24.163 [2024-04-17 14:52:32.459725] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.163 [2024-04-17 14:52:32.508757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.163 [2024-04-17 14:52:32.509007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:33:24.163 [2024-04-17 14:52:32.509139] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.869 ms 00:33:24.163 [2024-04-17 14:52:32.509197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.163 [2024-04-17 14:52:32.557551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.163 [2024-04-17 14:52:32.557792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:33:24.163 [2024-04-17 14:52:32.557959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.148 ms 00:33:24.163 
[2024-04-17 14:52:32.558002] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.163 [2024-04-17 14:52:32.558602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.163 [2024-04-17 14:52:32.558750] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:24.163 [2024-04-17 14:52:32.558866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.505 ms 00:33:24.163 [2024-04-17 14:52:32.558916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.163 [2024-04-17 14:52:32.673122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.163 [2024-04-17 14:52:32.673392] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:33:24.163 [2024-04-17 14:52:32.673528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 114.083 ms 00:33:24.163 [2024-04-17 14:52:32.673581] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.163 [2024-04-17 14:52:32.722748] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.163 [2024-04-17 14:52:32.723024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:33:24.163 [2024-04-17 14:52:32.723149] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.059 ms 00:33:24.163 [2024-04-17 14:52:32.723282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.163 [2024-04-17 14:52:32.725862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.163 [2024-04-17 14:52:32.726037] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:33:24.163 [2024-04-17 14:52:32.726150] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.461 ms 00:33:24.163 [2024-04-17 14:52:32.726246] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.421 [2024-04-17 14:52:32.774190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.421 [2024-04-17 14:52:32.774463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:24.421 [2024-04-17 14:52:32.774653] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.718 ms 00:33:24.421 [2024-04-17 14:52:32.774702] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.421 [2024-04-17 14:52:32.774807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.421 [2024-04-17 14:52:32.774940] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:24.421 [2024-04-17 14:52:32.774994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:33:24.421 [2024-04-17 14:52:32.775033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.421 [2024-04-17 14:52:32.775191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.421 [2024-04-17 14:52:32.775234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:24.421 [2024-04-17 14:52:32.775348] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:33:24.421 [2024-04-17 14:52:32.775391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.421 [2024-04-17 14:52:32.776672] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2883.326 ms, result 0 00:33:24.421 { 00:33:24.421 "name": "ftl0", 00:33:24.421 "uuid": "6bac9abf-074b-45be-bea1-6bcd4aa06261" 00:33:24.421 } 
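One detail worth pulling out of the layout dump above: the 80.00 MiB l2p region follows directly from the logged entry count and address size, and the --l2p_dram_limit 10 passed to bdev_ftl_create is what shrinks the resident slice. Plain arithmetic on the log's own numbers, nothing assumed beyond them:

    # L2P sizing implied by the layout dump (values copied from the log):
    l2p_entries=20971520      # "L2P entries: 20971520", one per logical block (block_size 4096)
    l2p_addr_size=4           # "L2P address size: 4" bytes per entry
    echo $(( l2p_entries * l2p_addr_size / 1048576 ))   # 80 -> the 80.00 MiB l2p region
    # --l2p_dram_limit 10 caps the in-DRAM portion, matching the earlier
    # "l2p maximum resident size is: 9 (of 10) MiB" notice from ftl_l2p_cache.c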
00:33:24.421 14:52:32 -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:33:24.422 14:52:32 -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:33:24.680 14:52:33 -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:33:24.680 14:52:33 -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:33:24.680 14:52:33 -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:33:24.938 /dev/nbd0 00:33:24.938 14:52:33 -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:33:24.938 14:52:33 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:33:24.938 14:52:33 -- common/autotest_common.sh@855 -- # local i 00:33:24.938 14:52:33 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:33:24.938 14:52:33 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:33:24.938 14:52:33 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:33:24.938 14:52:33 -- common/autotest_common.sh@859 -- # break 00:33:24.938 14:52:33 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:33:24.938 14:52:33 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:33:24.938 14:52:33 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:33:24.938 1+0 records in 00:33:24.938 1+0 records out 00:33:24.938 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00068799 s, 6.0 MB/s 00:33:24.938 14:52:33 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:33:24.938 14:52:33 -- common/autotest_common.sh@872 -- # size=4096 00:33:24.939 14:52:33 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:33:24.939 14:52:33 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:33:24.939 14:52:33 -- common/autotest_common.sh@875 -- # return 0 00:33:24.939 14:52:33 -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:33:24.939 [2024-04-17 14:52:33.449619] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:33:24.939 [2024-04-17 14:52:33.449831] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81694 ] 00:33:25.197 [2024-04-17 14:52:33.625553] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.456 [2024-04-17 14:52:33.969155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:33.121  Copying: 174/1024 [MB] (174 MBps) Copying: 355/1024 [MB] (181 MBps) Copying: 534/1024 [MB] (178 MBps) Copying: 711/1024 [MB] (177 MBps) Copying: 878/1024 [MB] (167 MBps) Copying: 1024/1024 [MB] (average 175 MBps) 00:33:33.121 00:33:33.121 14:52:41 -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:33:35.703 14:52:43 -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:33:35.703 [2024-04-17 14:52:43.908343] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
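Earlier in this trace, before spdk_dd touched /dev/nbd0, the waitfornbd gate ran: poll /proc/partitions for the device, then prove it serves real I/O with a single direct-I/O dd. A condensed sketch of that check (the probe commands and the retry bound of 20 are straight from the xtrace; the sleep between probes is an assumption, since xtrace does not show waits):

    # Condensed waitfornbd, mirroring the probes logged above.
    waitfornbd() {
        local nbd_name=$1 i size
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed pacing between probes
        done
        dd if=/dev/$nbd_name of=./nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s ./nbdtest)
        rm -f ./nbdtest
        [[ $size != 0 ]]   # trace: '[' 4096 '!=' 0 ']' -> return 0
    }

Once this gate passes, spdk_dd replays the 1 GiB testfile (bs=4096, count=262144) through /dev/nbd0 with --oflag=direct; that is the long ~19 MBps copy stream that follows, versus ~175 MBps for the urandom-to-file pass above, since every 4 KiB block now round-trips through nbd into the FTL write path.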
00:33:35.703 [2024-04-17 14:52:43.908810] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81808 ] 00:33:35.703 [2024-04-17 14:52:44.100439] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:35.961 [2024-04-17 14:52:44.406381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:31.631  Copying: 19/1024 [MB] (19 MBps) Copying: 36/1024 [MB] (16 MBps) Copying: 54/1024 [MB] (18 MBps) Copying: 72/1024 [MB] (18 MBps) Copying: 90/1024 [MB] (18 MBps) Copying: 108/1024 [MB] (18 MBps) Copying: 128/1024 [MB] (19 MBps) Copying: 147/1024 [MB] (18 MBps) Copying: 166/1024 [MB] (19 MBps) Copying: 187/1024 [MB] (20 MBps) Copying: 205/1024 [MB] (17 MBps) Copying: 224/1024 [MB] (19 MBps) Copying: 243/1024 [MB] (19 MBps) Copying: 262/1024 [MB] (19 MBps) Copying: 283/1024 [MB] (20 MBps) Copying: 301/1024 [MB] (18 MBps) Copying: 322/1024 [MB] (20 MBps) Copying: 342/1024 [MB] (20 MBps) Copying: 360/1024 [MB] (17 MBps) Copying: 378/1024 [MB] (17 MBps) Copying: 394/1024 [MB] (16 MBps) Copying: 411/1024 [MB] (17 MBps) Copying: 428/1024 [MB] (16 MBps) Copying: 445/1024 [MB] (16 MBps) Copying: 463/1024 [MB] (18 MBps) Copying: 483/1024 [MB] (19 MBps) Copying: 502/1024 [MB] (18 MBps) Copying: 521/1024 [MB] (18 MBps) Copying: 540/1024 [MB] (19 MBps) Copying: 560/1024 [MB] (19 MBps) Copying: 580/1024 [MB] (20 MBps) Copying: 599/1024 [MB] (19 MBps) Copying: 619/1024 [MB] (19 MBps) Copying: 637/1024 [MB] (18 MBps) Copying: 657/1024 [MB] (20 MBps) Copying: 678/1024 [MB] (20 MBps) Copying: 699/1024 [MB] (20 MBps) Copying: 719/1024 [MB] (20 MBps) Copying: 739/1024 [MB] (19 MBps) Copying: 758/1024 [MB] (19 MBps) Copying: 778/1024 [MB] (19 MBps) Copying: 797/1024 [MB] (19 MBps) Copying: 817/1024 [MB] (20 MBps) Copying: 836/1024 [MB] (18 MBps) Copying: 856/1024 [MB] (20 MBps) Copying: 875/1024 [MB] (19 MBps) Copying: 895/1024 [MB] (19 MBps) Copying: 914/1024 [MB] (19 MBps) Copying: 934/1024 [MB] (19 MBps) Copying: 953/1024 [MB] (19 MBps) Copying: 974/1024 [MB] (20 MBps) Copying: 995/1024 [MB] (20 MBps) Copying: 1015/1024 [MB] (20 MBps) Copying: 1024/1024 [MB] (average 19 MBps) 00:34:31.631 00:34:31.631 14:53:39 -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:34:31.631 14:53:39 -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:34:31.631 14:53:40 -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:34:31.890 [2024-04-17 14:53:40.265081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:31.890 [2024-04-17 14:53:40.265392] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:31.890 [2024-04-17 14:53:40.265603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:31.890 [2024-04-17 14:53:40.265807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.890 [2024-04-17 14:53:40.265938] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:31.890 [2024-04-17 14:53:40.270855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:31.890 [2024-04-17 14:53:40.271030] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:31.890 [2024-04-17 14:53:40.271188] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 4.689 ms 00:34:31.890 [2024-04-17 14:53:40.271368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.890 [2024-04-17 14:53:40.273158] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:31.890 [2024-04-17 14:53:40.273317] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:31.890 [2024-04-17 14:53:40.273466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.664 ms 00:34:31.890 [2024-04-17 14:53:40.273637] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.890 [2024-04-17 14:53:40.289228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:31.890 [2024-04-17 14:53:40.289440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:34:31.890 [2024-04-17 14:53:40.289599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.468 ms 00:34:31.890 [2024-04-17 14:53:40.289721] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.890 [2024-04-17 14:53:40.296035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:31.890 [2024-04-17 14:53:40.296225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:34:31.890 [2024-04-17 14:53:40.296397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.190 ms 00:34:31.890 [2024-04-17 14:53:40.296543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.890 [2024-04-17 14:53:40.344571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:31.890 [2024-04-17 14:53:40.344799] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:34:31.890 [2024-04-17 14:53:40.344936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.797 ms 00:34:31.890 [2024-04-17 14:53:40.344998] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.890 [2024-04-17 14:53:40.372782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:31.890 [2024-04-17 14:53:40.373005] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:34:31.890 [2024-04-17 14:53:40.373140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.568 ms 00:34:31.890 [2024-04-17 14:53:40.373201] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.890 [2024-04-17 14:53:40.373513] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:31.890 [2024-04-17 14:53:40.373582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:34:31.890 [2024-04-17 14:53:40.373649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.182 ms 00:34:31.890 [2024-04-17 14:53:40.373742] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.890 [2024-04-17 14:53:40.423761] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:31.890 [2024-04-17 14:53:40.424009] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:34:31.890 [2024-04-17 14:53:40.424166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.923 ms 00:34:31.891 [2024-04-17 14:53:40.424231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:31.891 [2024-04-17 14:53:40.471791] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:31.891 [2024-04-17 14:53:40.472032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:34:31.891 [2024-04-17 
14:53:40.472178] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.418 ms 00:34:31.891 [2024-04-17 14:53:40.472243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:32.151 [2024-04-17 14:53:40.520867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:32.151 [2024-04-17 14:53:40.521096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:34:32.151 [2024-04-17 14:53:40.521235] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.483 ms 00:34:32.151 [2024-04-17 14:53:40.521296] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:32.151 [2024-04-17 14:53:40.569147] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:32.151 [2024-04-17 14:53:40.569364] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:34:32.151 [2024-04-17 14:53:40.569511] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.585 ms 00:34:32.151 [2024-04-17 14:53:40.569629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:32.151 [2024-04-17 14:53:40.569769] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:32.151 [2024-04-17 14:53:40.569923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.570117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.570298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.570429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.570551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.570793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.570962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.571135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.571266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.571425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.571598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.571772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.571982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.572183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.572353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.572506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.572662] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.572825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.572992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.573181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.573346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.573454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.573577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.573750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.573890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.574048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.574208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.574383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.574583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.574726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.574880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.575051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.575206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.575379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.575480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.575601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.575774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.575905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.576061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.576175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 14:53:40.576261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:32.151 [2024-04-17 
14:53:40.576473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:32.152 [2024-04-17 14:53:40.576646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:32.152 [2024-04-17 14:53:40.576840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:32.152 [2024-04-17 14:53:40.577007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:32.152 [2024-04-17 14:53:40.577190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:32.152 [2024-04-17 14:53:40.577389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:32.152 [2024-04-17 14:53:40.577588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:32.152 [2024-04-17 14:53:40.577800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:32.152 [2024-04-17 14:53:40.577982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:32.152 [2024-04-17 14:53:40.578161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:32.152 [2024-04-17 14:53:40.578354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:32.152 [2024-04-17 14:53:40.578586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:32.152 [2024-04-17 14:53:40.578756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:32.152 [2024-04-17 14:53:40.578926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:32.152 [2024-04-17 14:53:40.579114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:32.152 [2024-04-17 14:53:40.579333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:32.152 [2024-04-17 14:53:40.579563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:32.152 [2024-04-17 14:53:40.579721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:32.152 [2024-04-17 14:53:40.579922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:32.152 [2024-04-17 14:53:40.580094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:32.152 [2024-04-17 14:53:40.580271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:32.152 [2024-04-17 14:53:40.580471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:32.152 [2024-04-17 14:53:40.580675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:32.152 [2024-04-17 14:53:40.580853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:32.152 [2024-04-17 14:53:40.581036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:34:32.152 [2024-04-17 14:53:40.581219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:34:32.152 [2024-04-17 14:53:40.581398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:34:32.152 [2024-04-17 14:53:40.581586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:34:32.152 [2024-04-17 14:53:40.581791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:34:32.152 [2024-04-17 14:53:40.581961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:34:32.152 [2024-04-17 14:53:40.582153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:34:32.152 [2024-04-17 14:53:40.582316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:34:32.152 [2024-04-17 14:53:40.582484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:34:32.152 [2024-04-17 14:53:40.582594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:34:32.152 [2024-04-17 14:53:40.582703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:34:32.152 [2024-04-17 14:53:40.582843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:34:32.152 [2024-04-17 14:53:40.582929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:34:32.152 [2024-04-17 14:53:40.583020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:34:32.152 [2024-04-17 14:53:40.583085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:34:32.152 [2024-04-17 14:53:40.583174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:34:32.152 [2024-04-17 14:53:40.583240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:34:32.152 [2024-04-17 14:53:40.583302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:34:32.152 [2024-04-17 14:53:40.583391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:34:32.152 [2024-04-17 14:53:40.583453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:34:32.152 [2024-04-17 14:53:40.583578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:34:32.152 [2024-04-17 14:53:40.583643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:34:32.152 [2024-04-17 14:53:40.583765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:34:32.152 [2024-04-17 14:53:40.583869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:34:32.152 [2024-04-17 14:53:40.583979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:34:32.152 [2024-04-17 14:53:40.584057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:34:32.152 [2024-04-17 14:53:40.584134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:34:32.152 [2024-04-17 14:53:40.584232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:34:32.152 [2024-04-17 14:53:40.584385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:34:32.152 [2024-04-17 14:53:40.584528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:34:32.152 [2024-04-17 14:53:40.584640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:34:32.152 [2024-04-17 14:53:40.584742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:34:32.152 [2024-04-17 14:53:40.584851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:34:32.152 [2024-04-17 14:53:40.584951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:34:32.152 [2024-04-17 14:53:40.585023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:34:32.152 [2024-04-17 14:53:40.585124] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:34:32.152 [2024-04-17 14:53:40.585208] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6bac9abf-074b-45be-bea1-6bcd4aa06261
00:34:32.152 [2024-04-17 14:53:40.585321] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:34:32.152 [2024-04-17 14:53:40.585408] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:34:32.152 [2024-04-17 14:53:40.585450] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:34:32.152 [2024-04-17 14:53:40.585537] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:34:32.152 [2024-04-17 14:53:40.585583] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:34:32.152 [2024-04-17 14:53:40.585623] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:34:32.152 [2024-04-17 14:53:40.585666] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:34:32.152 [2024-04-17 14:53:40.585708] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:34:32.152 [2024-04-17 14:53:40.585782] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:34:32.152 [2024-04-17 14:53:40.585877] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:32.152 [2024-04-17 14:53:40.585923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:34:32.152 [2024-04-17 14:53:40.586072] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.110 ms
00:34:32.152 [2024-04-17 14:53:40.586118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:32.152 [2024-04-17 14:53:40.610546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:32.152 [2024-04-17 14:53:40.610744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:34:32.152 [2024-04-17 14:53:40.610836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.254 ms
00:34:32.152 [2024-04-17 14:53:40.610879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:32.152 [2024-04-17 14:53:40.611280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:32.152 [2024-04-17 14:53:40.611416] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:34:32.152 [2024-04-17 14:53:40.611547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms
00:34:32.153 [2024-04-17 14:53:40.611591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:32.153 [2024-04-17 14:53:40.691185] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:32.153 [2024-04-17 14:53:40.691430] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:34:32.153 [2024-04-17 14:53:40.691560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:32.153 [2024-04-17 14:53:40.691603] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:32.153 [2024-04-17 14:53:40.691718] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:32.153 [2024-04-17 14:53:40.691860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:34:32.153 [2024-04-17 14:53:40.691913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:32.153 [2024-04-17 14:53:40.691947] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:32.153 [2024-04-17 14:53:40.692087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:32.153 [2024-04-17 14:53:40.692131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:34:32.153 [2024-04-17 14:53:40.692231] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:32.153 [2024-04-17 14:53:40.692268] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:32.153 [2024-04-17 14:53:40.692318] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:32.153 [2024-04-17 14:53:40.692354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:34:32.153 [2024-04-17 14:53:40.692391] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:32.153 [2024-04-17 14:53:40.692430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:32.412 [2024-04-17 14:53:40.830186] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:32.412 [2024-04-17 14:53:40.830450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:34:32.412 [2024-04-17 14:53:40.830617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:32.412 [2024-04-17 14:53:40.830661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:32.412 [2024-04-17 14:53:40.885234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:32.412 [2024-04-17 14:53:40.885468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:34:32.412 [2024-04-17 14:53:40.885594] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:32.412 [2024-04-17 14:53:40.885641] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:32.412 [2024-04-17 14:53:40.885773] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:32.412 [2024-04-17 14:53:40.885869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:34:32.412 [2024-04-17 14:53:40.885915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:32.412 [2024-04-17 14:53:40.885950] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:32.412 [2024-04-17 14:53:40.886080] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:32.412 [2024-04-17 14:53:40.886178] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:34:32.412 [2024-04-17 14:53:40.886255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:32.412 [2024-04-17 14:53:40.886294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:32.412 [2024-04-17 14:53:40.886500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:32.412 [2024-04-17 14:53:40.886570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:34:32.412 [2024-04-17 14:53:40.886662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:32.412 [2024-04-17 14:53:40.886713] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:32.412 [2024-04-17 14:53:40.886797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:32.412 [2024-04-17 14:53:40.886921] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:34:32.412 [2024-04-17 14:53:40.886978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:32.412 [2024-04-17 14:53:40.887013] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:32.412 [2024-04-17 14:53:40.887088] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:32.412 [2024-04-17 14:53:40.887126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:34:32.412 [2024-04-17 14:53:40.887164] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:32.413 [2024-04-17 14:53:40.887199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:32.413 [2024-04-17 14:53:40.887279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:32.413 [2024-04-17 14:53:40.887410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:34:32.413 [2024-04-17 14:53:40.887480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:32.413 [2024-04-17 14:53:40.887533] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:32.413 [2024-04-17 14:53:40.887729] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 622.598 ms, result 0
00:34:32.413 true
00:34:32.413 14:53:40 -- ftl/dirty_shutdown.sh@83 -- # kill -9 81552
00:34:32.413 14:53:40 -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81552
00:34:32.413 14:53:40 -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144
00:34:32.672 [2024-04-17 14:53:41.029348] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization...
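For anyone replaying this job by hand, the sh traces above (ftl/dirty_shutdown.sh lines 83-88) reduce to the short shell sketch below. It is reconstructed only from the traced commands in this log: the pid 81552 and the /home/vagrant paths are specific to this run, SPDK_BIN_DIR is assumed (as in SPDK's test environment) to point at build/bin, and spdk_tgt is assumed to already expose the FTL bdev ftl0 described by ftl.json.

  # Sketch of the traced dirty-shutdown sequence; pid and paths are job-specific.
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  kill -9 81552                              # hard-kill spdk_tgt: FTL gets no clean shutdown
  rm -f /dev/shm/spdk_tgt_trace.pid81552     # drop the stale trace file
  # Stage 1 GiB of random data: 262144 blocks x 4096 B = 1 GiB ("Copying: 1024/1024 [MB]" below)
  "$SPDK_DIR/build/bin/spdk_dd" --if=/dev/urandom \
      --of="$SPDK_DIR/test/ftl/testfile2" --bs=4096 --count=262144
  # Write it onto the FTL bdev; opening ftl0 after the kill triggers the dirty-start recovery
  "$SPDK_DIR/build/bin/spdk_dd" --if="$SPDK_DIR/test/ftl/testfile2" \
      --ob=ftl0 --count=262144 --seek=262144 \
      --json="$SPDK_DIR/test/ftl/config/ftl.json"

Because of the kill -9, FTL never runs a second 'FTL shutdown' management process, so the next open of ftl0 (the @88 spdk_dd) has to go through the recovery path visible below ("Performing recovery on blobstore", "SHM: clean 0, shm_clean 0").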
00:34:32.672 [2024-04-17 14:53:41.029817] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82379 ]
00:34:32.672 [2024-04-17 14:53:41.215531] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:32.930 [2024-04-17 14:53:41.475034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:34:40.499  Copying: 186/1024 [MB] (186 MBps) Copying: 371/1024 [MB] (184 MBps) Copying: 557/1024 [MB] (186 MBps) Copying: 736/1024 [MB] (179 MBps) Copying: 918/1024 [MB] (181 MBps) Copying: 1024/1024 [MB] (average 182 MBps)
00:34:40.499 
00:34:40.499 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81552 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1
00:34:40.499 14:53:48 -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:34:40.759 [2024-04-17 14:53:49.041951] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization...
00:34:40.759 [2024-04-17 14:53:49.042308] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82460 ]
00:34:40.759 [2024-04-17 14:53:49.210270] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:41.017 [2024-04-17 14:53:49.488099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:34:41.585 [2024-04-17 14:53:49.932534] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:34:41.585 [2024-04-17 14:53:49.932804] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:34:41.585 [2024-04-17 14:53:49.996759] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore
00:34:41.585 [2024-04-17 14:53:49.997313] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:34:41.585 [2024-04-17 14:53:49.997756] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
00:34:41.844 [2024-04-17 14:53:50.240213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:41.844 [2024-04-17 14:53:50.240529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:34:41.844 [2024-04-17 14:53:50.240726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:34:41.844 [2024-04-17 14:53:50.240790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:41.844 [2024-04-17 14:53:50.240921] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:41.844 [2024-04-17 14:53:50.241032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:34:41.844 [2024-04-17 14:53:50.241102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms
00:34:41.844 [2024-04-17 14:53:50.241148] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:41.844 [2024-04-17 14:53:50.241228] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:34:41.844 [2024-04-17 14:53:50.243541] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:34:41.844 [2024-04-17 14:53:50.243790] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:41.844 [2024-04-17 14:53:50.243928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:34:41.844 [2024-04-17 14:53:50.243999] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.567 ms
00:34:41.844 [2024-04-17 14:53:50.244123] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:41.844 [2024-04-17 14:53:50.245942] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:34:41.844 [2024-04-17 14:53:50.277463] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:41.844 [2024-04-17 14:53:50.277708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:34:41.844 [2024-04-17 14:53:50.277843] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.520 ms
00:34:41.844 [2024-04-17 14:53:50.277902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:41.844 [2024-04-17 14:53:50.278028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:41.844 [2024-04-17 14:53:50.278215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:34:41.844 [2024-04-17 14:53:50.278294] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms
00:34:41.844 [2024-04-17 14:53:50.278340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:41.844 [2024-04-17 14:53:50.286223] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:41.844 [2024-04-17 14:53:50.286450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:34:41.844 [2024-04-17 14:53:50.286594] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.686 ms
00:34:41.844 [2024-04-17 14:53:50.286652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:41.844 [2024-04-17 14:53:50.286835] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:41.844 [2024-04-17 14:53:50.287015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:34:41.844 [2024-04-17 14:53:50.287087] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms
00:34:41.844 [2024-04-17 14:53:50.287133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:41.844 [2024-04-17 14:53:50.287231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:41.844 [2024-04-17 14:53:50.287284] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:34:41.844 [2024-04-17 14:53:50.287330] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:34:41.844 [2024-04-17 14:53:50.287431] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:41.844 [2024-04-17 14:53:50.287554] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:34:41.844 [2024-04-17 14:53:50.296566] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:41.844 [2024-04-17 14:53:50.296739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:34:41.844 [2024-04-17 14:53:50.296922] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.022 ms
00:34:41.844 [2024-04-17 14:53:50.297052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:41.844 [2024-04-17 14:53:50.297151] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:41.844 [2024-04-17 14:53:50.297284] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:34:41.844 [2024-04-17 14:53:50.297343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms
00:34:41.844 [2024-04-17 14:53:50.297434] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:41.844 [2024-04-17 14:53:50.297632] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:34:41.844 [2024-04-17 14:53:50.297781] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes
00:34:41.844 [2024-04-17 14:53:50.297957] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:34:41.844 [2024-04-17 14:53:50.298112] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes
00:34:41.844 [2024-04-17 14:53:50.298338] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes
00:34:41.844 [2024-04-17 14:53:50.298518] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:34:41.844 [2024-04-17 14:53:50.298659] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes
00:34:41.844 [2024-04-17 14:53:50.298793] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:34:41.844 [2024-04-17 14:53:50.298883] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:34:41.844 [2024-04-17 14:53:50.299035] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:34:41.844 [2024-04-17 14:53:50.299096] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:34:41.844 [2024-04-17 14:53:50.299142] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024
00:34:41.844 [2024-04-17 14:53:50.299187] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4
00:34:41.844 [2024-04-17 14:53:50.299233] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:41.844 [2024-04-17 14:53:50.299280] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:34:41.844 [2024-04-17 14:53:50.299401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.606 ms
00:34:41.844 [2024-04-17 14:53:50.299455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:41.844 [2024-04-17 14:53:50.299608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:41.844 [2024-04-17 14:53:50.299695] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:34:41.844 [2024-04-17 14:53:50.299743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms
00:34:41.844 [2024-04-17 14:53:50.299852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:41.844 [2024-04-17 14:53:50.300005] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:34:41.844 [2024-04-17 14:53:50.300138] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:34:41.844 [2024-04-17 14:53:50.300204] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:34:41.844 [2024-04-17 14:53:50.300271] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:34:41.844 [2024-04-17 14:53:50.300382] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:34:41.844 [2024-04-17 14:53:50.300441] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:34:41.844 [2024-04-17 14:53:50.300501] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
00:34:41.844 [2024-04-17 14:53:50.300552] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:34:41.844 [2024-04-17 14:53:50.300613] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB
00:34:41.844 [2024-04-17 14:53:50.300658] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:34:41.845 [2024-04-17 14:53:50.300755] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:34:41.845 [2024-04-17 14:53:50.300836] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB
00:34:41.845 [2024-04-17 14:53:50.300900] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:34:41.845 [2024-04-17 14:53:50.300970] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:34:41.845 [2024-04-17 14:53:50.301016] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB
00:34:41.845 [2024-04-17 14:53:50.301061] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:34:41.845 [2024-04-17 14:53:50.301105] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:34:41.845 [2024-04-17 14:53:50.301150] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB
00:34:41.845 [2024-04-17 14:53:50.301194] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:34:41.845 [2024-04-17 14:53:50.301293] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc
00:34:41.845 [2024-04-17 14:53:50.301358] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB
00:34:41.845 [2024-04-17 14:53:50.301429] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB
00:34:41.845 [2024-04-17 14:53:50.301473] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:34:41.845 [2024-04-17 14:53:50.301536] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
00:34:41.845 [2024-04-17 14:53:50.301582] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:34:41.845 [2024-04-17 14:53:50.301628] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:34:41.845 [2024-04-17 14:53:50.301748] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB
00:34:41.845 [2024-04-17 14:53:50.301802] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:34:41.845 [2024-04-17 14:53:50.301849] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:34:41.845 [2024-04-17 14:53:50.301894] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
00:34:41.845 [2024-04-17 14:53:50.301997] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:34:41.845 [2024-04-17 14:53:50.302049] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:34:41.845 [2024-04-17 14:53:50.302095] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB
00:34:41.845 [2024-04-17 14:53:50.302139] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:34:41.845 [2024-04-17 14:53:50.302184] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:34:41.845 [2024-04-17 14:53:50.302259] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB
00:34:41.845 [2024-04-17 14:53:50.302303] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:34:41.845 [2024-04-17 14:53:50.302347] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:34:41.845 [2024-04-17 14:53:50.302411] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB
00:34:41.845 [2024-04-17 14:53:50.302532] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:34:41.845 [2024-04-17 14:53:50.302588] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:34:41.845 [2024-04-17 14:53:50.302691] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:34:41.845 [2024-04-17 14:53:50.302745] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:34:41.845 [2024-04-17 14:53:50.302791] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:34:41.845 [2024-04-17 14:53:50.302881] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:34:41.845 [2024-04-17 14:53:50.302935] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:34:41.845 [2024-04-17 14:53:50.303095] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:34:41.845 [2024-04-17 14:53:50.303149] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:34:41.845 [2024-04-17 14:53:50.303195] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:34:41.845 [2024-04-17 14:53:50.303240] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:34:41.845 [2024-04-17 14:53:50.303287] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:34:41.845 [2024-04-17 14:53:50.303432] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:34:41.845 [2024-04-17 14:53:50.303524] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:34:41.845 [2024-04-17 14:53:50.303600] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80
00:34:41.845 [2024-04-17 14:53:50.303675] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80
00:34:41.845 [2024-04-17 14:53:50.303826] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400
00:34:41.845 [2024-04-17 14:53:50.303900] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400
00:34:41.845 [2024-04-17 14:53:50.303974] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400
00:34:41.845 [2024-04-17 14:53:50.304047] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400
00:34:41.845 [2024-04-17 14:53:50.304215] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40
00:34:41.845 [2024-04-17 14:53:50.304292] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40
00:34:41.845 [2024-04-17 14:53:50.304367] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20
00:34:41.845 [2024-04-17 14:53:50.304440] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20
00:34:41.845 [2024-04-17 14:53:50.304617] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000
00:34:41.845 [2024-04-17 14:53:50.304696] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120
00:34:41.845 [2024-04-17 14:53:50.304774] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:34:41.845 [2024-04-17 14:53:50.304905] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:34:41.845 [2024-04-17 14:53:50.305122] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:34:41.845 [2024-04-17 14:53:50.305203] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:34:41.845 [2024-04-17 14:53:50.305340] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:34:41.845 [2024-04-17 14:53:50.305428] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:34:41.845 [2024-04-17 14:53:50.305599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:41.845 [2024-04-17 14:53:50.305665] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:34:41.845 [2024-04-17 14:53:50.305713] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.651 ms
00:34:41.845 [2024-04-17 14:53:50.305759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:41.845 [2024-04-17 14:53:50.341683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:41.845 [2024-04-17 14:53:50.341921] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:34:41.845 [2024-04-17 14:53:50.342041] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.752 ms
00:34:41.845 [2024-04-17 14:53:50.342096] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:41.845 [2024-04-17 14:53:50.342258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:41.845 [2024-04-17 14:53:50.342319] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:34:41.845 [2024-04-17 14:53:50.342449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms
00:34:41.845 [2024-04-17 14:53:50.342614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:41.845 [2024-04-17 14:53:50.435990] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:41.845 [2024-04-17 14:53:50.436242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:34:41.845 [2024-04-17 14:53:50.436365] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.231 ms
00:34:41.845 [2024-04-17 14:53:50.436421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:41.845 [2024-04-17 14:53:50.436545] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:41.845 [2024-04-17 14:53:50.436666] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:34:42.128 [2024-04-17 14:53:50.436723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:34:42.128 [2024-04-17 14:53:50.436770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:42.128 [2024-04-17 14:53:50.437423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:42.128 [2024-04-17 14:53:50.437601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:34:42.128 [2024-04-17 14:53:50.437752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.481 ms
00:34:42.128 [2024-04-17 14:53:50.437882] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:42.128 [2024-04-17 14:53:50.438104] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:42.128 [2024-04-17 14:53:50.438191] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:34:42.128 [2024-04-17 14:53:50.438301] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms
00:34:42.128 [2024-04-17 14:53:50.438357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:42.128 [2024-04-17 14:53:50.472775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:42.128 [2024-04-17 14:53:50.473000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:34:42.128 [2024-04-17 14:53:50.473122] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.314 ms
00:34:42.128 [2024-04-17 14:53:50.473177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:42.128 [2024-04-17 14:53:50.496527] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:34:42.128 [2024-04-17 14:53:50.496718] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:34:42.128 [2024-04-17 14:53:50.496814] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:42.128 [2024-04-17 14:53:50.496850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:34:42.128 [2024-04-17 14:53:50.496885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.413 ms
00:34:42.128 [2024-04-17 14:53:50.496916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:42.128 [2024-04-17 14:53:50.532279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:42.128 [2024-04-17 14:53:50.532459] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:34:42.128 [2024-04-17 14:53:50.532589] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.290 ms
00:34:42.128 [2024-04-17 14:53:50.532642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:42.128 [2024-04-17 14:53:50.554713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:42.128 [2024-04-17 14:53:50.554886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:34:42.128 [2024-04-17 14:53:50.554990] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.975 ms
00:34:42.128 [2024-04-17 14:53:50.555033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:42.128 [2024-04-17 14:53:50.576904] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:42.128 [2024-04-17 14:53:50.577056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:34:42.128 [2024-04-17 14:53:50.577136] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.801 ms 00:34:42.128 [2024-04-17 14:53:50.577173] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.128 [2024-04-17 14:53:50.577816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.128 [2024-04-17 14:53:50.577941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:42.128 [2024-04-17 14:53:50.578025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.509 ms 00:34:42.128 [2024-04-17 14:53:50.578066] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.128 [2024-04-17 14:53:50.677964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.128 [2024-04-17 14:53:50.678177] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:34:42.128 [2024-04-17 14:53:50.678276] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.846 ms 00:34:42.128 [2024-04-17 14:53:50.678315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.128 [2024-04-17 14:53:50.692957] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:42.128 [2024-04-17 14:53:50.696742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.128 [2024-04-17 14:53:50.696956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:42.128 [2024-04-17 14:53:50.697115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.296 ms 00:34:42.128 [2024-04-17 14:53:50.697197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.128 [2024-04-17 14:53:50.697456] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.128 [2024-04-17 14:53:50.697598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:42.128 [2024-04-17 14:53:50.697684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:34:42.128 [2024-04-17 14:53:50.697726] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.128 [2024-04-17 14:53:50.697850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.128 [2024-04-17 14:53:50.697895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:42.128 [2024-04-17 14:53:50.697988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:34:42.128 [2024-04-17 14:53:50.698032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.128 [2024-04-17 14:53:50.700541] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.128 [2024-04-17 14:53:50.700663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:34:42.128 [2024-04-17 14:53:50.700742] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.456 ms 00:34:42.128 [2024-04-17 14:53:50.700865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.128 [2024-04-17 14:53:50.700933] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.128 [2024-04-17 14:53:50.700969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:42.128 [2024-04-17 14:53:50.701040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:42.128 [2024-04-17 14:53:50.701083] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.128 [2024-04-17 
14:53:50.701147] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:42.128 [2024-04-17 14:53:50.701184] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.128 [2024-04-17 14:53:50.701254] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:42.128 [2024-04-17 14:53:50.701292] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:34:42.128 [2024-04-17 14:53:50.701375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.402 [2024-04-17 14:53:50.746110] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.402 [2024-04-17 14:53:50.746358] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:42.402 [2024-04-17 14:53:50.746574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.672 ms 00:34:42.402 [2024-04-17 14:53:50.746628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.402 [2024-04-17 14:53:50.746758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.402 [2024-04-17 14:53:50.746810] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:42.402 [2024-04-17 14:53:50.746960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:34:42.402 [2024-04-17 14:53:50.747032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.402 [2024-04-17 14:53:50.748489] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 507.697 ms, result 0 00:35:14.339  Copying: 33/1024 [MB] (33 MBps) Copying: 68/1024 [MB] (35 MBps) Copying: 102/1024 [MB] (33 MBps) Copying: 134/1024 [MB] (32 MBps) Copying: 167/1024 [MB] (32 MBps) Copying: 201/1024 [MB] (33 MBps) Copying: 231/1024 [MB] (30 MBps) Copying: 262/1024 [MB] (30 MBps) Copying: 292/1024 [MB] (30 MBps) Copying: 323/1024 [MB] (30 MBps) Copying: 355/1024 [MB] (32 MBps) Copying: 387/1024 [MB] (31 MBps) Copying: 418/1024 [MB] (31 MBps) Copying: 449/1024 [MB] (30 MBps) Copying: 482/1024 [MB] (32 MBps) Copying: 515/1024 [MB] (33 MBps) Copying: 546/1024 [MB] (30 MBps) Copying: 579/1024 [MB] (33 MBps) Copying: 615/1024 [MB] (35 MBps) Copying: 648/1024 [MB] (33 MBps) Copying: 685/1024 [MB] (36 MBps) Copying: 721/1024 [MB] (35 MBps) Copying: 753/1024 [MB] (32 MBps) Copying: 786/1024 [MB] (32 MBps) Copying: 820/1024 [MB] (34 MBps) Copying: 855/1024 [MB] (34 MBps) Copying: 888/1024 [MB] (32 MBps) Copying: 921/1024 [MB] (33 MBps) Copying: 957/1024 [MB] (36 MBps) Copying: 992/1024 [MB] (35 MBps) Copying: 1023/1024 [MB] (30 MBps) Copying: 1024/1024 [MB] (average 32 MBps)[2024-04-17 14:54:22.751809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:14.339 [2024-04-17 14:54:22.752037] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:14.339 [2024-04-17 14:54:22.752140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:35:14.339 [2024-04-17 14:54:22.752183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:14.339 [2024-04-17 14:54:22.753135] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:14.339 [2024-04-17 14:54:22.768930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:14.339 [2024-04-17 14:54:22.769272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:14.339 
[2024-04-17 14:54:22.769449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.115 ms 00:35:14.339 [2024-04-17 14:54:22.769590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:14.339 [2024-04-17 14:54:22.786698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:14.339 [2024-04-17 14:54:22.786976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:14.339 [2024-04-17 14:54:22.787127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.471 ms 00:35:14.339 [2024-04-17 14:54:22.787250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:14.339 [2024-04-17 14:54:22.812429] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:14.339 [2024-04-17 14:54:22.812703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:35:14.339 [2024-04-17 14:54:22.812838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.097 ms 00:35:14.339 [2024-04-17 14:54:22.812897] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:14.339 [2024-04-17 14:54:22.821288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:14.339 [2024-04-17 14:54:22.821486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:35:14.339 [2024-04-17 14:54:22.821611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.256 ms 00:35:14.339 [2024-04-17 14:54:22.821723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:14.339 [2024-04-17 14:54:22.884072] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:14.339 [2024-04-17 14:54:22.884328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:35:14.339 [2024-04-17 14:54:22.884481] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.209 ms 00:35:14.339 [2024-04-17 14:54:22.884556] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:14.339 [2024-04-17 14:54:22.918732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:14.339 [2024-04-17 14:54:22.918962] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:35:14.339 [2024-04-17 14:54:22.919075] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.014 ms 00:35:14.339 [2024-04-17 14:54:22.919132] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:14.598 [2024-04-17 14:54:22.995128] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:14.598 [2024-04-17 14:54:22.995398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:35:14.598 [2024-04-17 14:54:22.995551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.883 ms 00:35:14.598 [2024-04-17 14:54:22.995670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:14.598 [2024-04-17 14:54:23.059954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:14.598 [2024-04-17 14:54:23.060186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:35:14.598 [2024-04-17 14:54:23.060344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.194 ms 00:35:14.598 [2024-04-17 14:54:23.060402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:14.598 [2024-04-17 14:54:23.106736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:14.598 [2024-04-17 14:54:23.106978] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: persist trim metadata 00:35:14.598 [2024-04-17 14:54:23.107085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.213 ms 00:35:14.598 [2024-04-17 14:54:23.107134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:14.598 [2024-04-17 14:54:23.153169] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:14.598 [2024-04-17 14:54:23.153409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:35:14.598 [2024-04-17 14:54:23.153578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.946 ms 00:35:14.598 [2024-04-17 14:54:23.153619] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:14.857 [2024-04-17 14:54:23.202625] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:14.857 [2024-04-17 14:54:23.202889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:35:14.857 [2024-04-17 14:54:23.203001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.817 ms 00:35:14.857 [2024-04-17 14:54:23.203052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:14.857 [2024-04-17 14:54:23.203147] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:14.857 [2024-04-17 14:54:23.203216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 121856 / 261120 wr_cnt: 1 state: open 00:35:14.857 [2024-04-17 14:54:23.203298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:35:14.857 [2024-04-17 14:54:23.203365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:14.857 [2024-04-17 14:54:23.203503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:14.857 [2024-04-17 14:54:23.203576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:14.857 [2024-04-17 14:54:23.203666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:14.857 [2024-04-17 14:54:23.203777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:14.857 [2024-04-17 14:54:23.203885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:14.857 [2024-04-17 14:54:23.203969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:14.857 [2024-04-17 14:54:23.204038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:14.857 [2024-04-17 14:54:23.204140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:14.857 [2024-04-17 14:54:23.204208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:14.857 [2024-04-17 14:54:23.204336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:14.857 [2024-04-17 14:54:23.204409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:14.857 [2024-04-17 14:54:23.204522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:14.857 [2024-04-17 14:54:23.204594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 
wr_cnt: 0 state: free 00:35:14.857 [2024-04-17 14:54:23.204791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:14.857 [2024-04-17 14:54:23.204915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:14.857 [2024-04-17 14:54:23.205079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:14.857 [2024-04-17 14:54:23.205151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:14.857 [2024-04-17 14:54:23.205327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:14.857 [2024-04-17 14:54:23.205452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:14.857 [2024-04-17 14:54:23.205547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:14.857 [2024-04-17 14:54:23.205671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:14.857 [2024-04-17 14:54:23.205742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:14.857 [2024-04-17 14:54:23.205852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:14.857 [2024-04-17 14:54:23.205965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:14.857 [2024-04-17 14:54:23.206093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:14.857 [2024-04-17 14:54:23.206205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:14.857 [2024-04-17 14:54:23.206415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:14.857 [2024-04-17 14:54:23.206506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:14.857 [2024-04-17 14:54:23.206631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:14.857 [2024-04-17 14:54:23.206703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:14.857 [2024-04-17 14:54:23.206769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:14.857 [2024-04-17 14:54:23.206922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:14.857 [2024-04-17 14:54:23.206995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:14.857 [2024-04-17 14:54:23.207187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:14.857 [2024-04-17 14:54:23.207261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:14.857 [2024-04-17 14:54:23.207327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:14.857 [2024-04-17 14:54:23.207482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.207620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.207695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.207805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.207874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.208013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.208084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.208167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.208277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.208389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.208458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.208546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.208728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.208800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.208963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.209087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.209160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.209274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.209344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.209457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.209541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.209662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.209779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.209969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.210092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.210167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.210281] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.210352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.210515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.210637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.210800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.210921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.211034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.211153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.211281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.211395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.211526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.211675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.211800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.211917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.212014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.212121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.212183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.212242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.212333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.212391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.212450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.212605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.212675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.212739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.212839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.212966] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.213068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.213131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.213210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.213307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.213422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.213521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.213796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.213970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.214142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:14.858 [2024-04-17 14:54:23.214326] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:14.858 [2024-04-17 14:54:23.214469] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6bac9abf-074b-45be-bea1-6bcd4aa06261 00:35:14.858 [2024-04-17 14:54:23.214640] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 121856 00:35:14.858 [2024-04-17 14:54:23.214731] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 122816 00:35:14.858 [2024-04-17 14:54:23.214838] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 121856 00:35:14.858 [2024-04-17 14:54:23.214904] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0079 00:35:14.858 [2024-04-17 14:54:23.214977] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:14.859 [2024-04-17 14:54:23.215066] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:14.859 [2024-04-17 14:54:23.215191] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:14.859 [2024-04-17 14:54:23.215239] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:14.859 [2024-04-17 14:54:23.215305] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:14.859 [2024-04-17 14:54:23.215401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:14.859 [2024-04-17 14:54:23.215451] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:14.859 [2024-04-17 14:54:23.215551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.255 ms 00:35:14.859 [2024-04-17 14:54:23.215606] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:14.859 [2024-04-17 14:54:23.239901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:14.859 [2024-04-17 14:54:23.240150] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:14.859 [2024-04-17 14:54:23.240241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.118 ms 00:35:14.859 [2024-04-17 14:54:23.240281] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
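The statistics dump above prints WAF next to the raw write counters, and the two are consistent with WAF being total writes divided by user writes. A minimal sketch of that check, assuming this ratio is what ftl_dev_dump_stats reports (the log itself does not show the formula):

# Annotation, not part of the captured console output: sanity-check the
# WAF values printed by ftl_dev_dump_stats, assuming WAF is simply
# total writes / user writes (an assumption inferred from the adjacent
# counters in this log).
def waf(total_writes: int, user_writes: int) -> float:
    return total_writes / user_writes

# Dump above (dirty shutdown): 122816 / 121856 -> ~1.0079
assert abs(waf(122816, 121856) - 1.0079) < 5e-5
# Dump after restart, further down in the log: 144832 / 142848 -> ~1.0139
assert abs(waf(144832, 142848) - 1.0139) < 5e-5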
00:35:14.859 [2024-04-17 14:54:23.240684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:14.859 [2024-04-17 14:54:23.240793] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:14.859 [2024-04-17 14:54:23.240876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:35:14.859 [2024-04-17 14:54:23.240949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:14.859 [2024-04-17 14:54:23.304333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:14.859 [2024-04-17 14:54:23.304637] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:14.859 [2024-04-17 14:54:23.304748] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:14.859 [2024-04-17 14:54:23.304790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:14.859 [2024-04-17 14:54:23.304895] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:14.859 [2024-04-17 14:54:23.304979] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:14.859 [2024-04-17 14:54:23.305019] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:14.859 [2024-04-17 14:54:23.305054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:14.859 [2024-04-17 14:54:23.305227] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:14.859 [2024-04-17 14:54:23.305328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:14.859 [2024-04-17 14:54:23.305403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:14.859 [2024-04-17 14:54:23.305454] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:14.859 [2024-04-17 14:54:23.305561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:14.859 [2024-04-17 14:54:23.305603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:14.859 [2024-04-17 14:54:23.305640] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:14.859 [2024-04-17 14:54:23.305712] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:14.859 [2024-04-17 14:54:23.438514] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:14.859 [2024-04-17 14:54:23.438758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:14.859 [2024-04-17 14:54:23.438855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:14.859 [2024-04-17 14:54:23.438896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.117 [2024-04-17 14:54:23.492187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:15.117 [2024-04-17 14:54:23.492417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:15.117 [2024-04-17 14:54:23.492589] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:15.117 [2024-04-17 14:54:23.492632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.117 [2024-04-17 14:54:23.492755] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:15.117 [2024-04-17 14:54:23.492854] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:15.117 [2024-04-17 14:54:23.492913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:15.117 [2024-04-17 
14:54:23.492947] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.117 [2024-04-17 14:54:23.493073] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:15.117 [2024-04-17 14:54:23.493116] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:15.117 [2024-04-17 14:54:23.493317] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:15.117 [2024-04-17 14:54:23.493357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.117 [2024-04-17 14:54:23.493548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:15.118 [2024-04-17 14:54:23.493741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:15.118 [2024-04-17 14:54:23.493782] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:15.118 [2024-04-17 14:54:23.493820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.118 [2024-04-17 14:54:23.493891] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:15.118 [2024-04-17 14:54:23.493928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:15.118 [2024-04-17 14:54:23.494027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:15.118 [2024-04-17 14:54:23.494110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.118 [2024-04-17 14:54:23.494175] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:15.118 [2024-04-17 14:54:23.494211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:15.118 [2024-04-17 14:54:23.494245] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:15.118 [2024-04-17 14:54:23.494286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.118 [2024-04-17 14:54:23.494391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:15.118 [2024-04-17 14:54:23.494503] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:15.118 [2024-04-17 14:54:23.494563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:15.118 [2024-04-17 14:54:23.494598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:15.118 [2024-04-17 14:54:23.494759] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 744.046 ms, result 0 00:35:17.669 00:35:17.669 00:35:17.669 14:54:25 -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:35:19.595 14:54:27 -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:35:19.595 [2024-04-17 14:54:27.927989] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
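The spdk_dd invocation above copies --count=262144 blocks from the ftl0 bdev into testfile. Assuming a 4 KiB logical block size (an assumption; the log does not print the block size), that works out to exactly the 1024/1024 [MB] total that the Copying progress lines report further down. A minimal sketch of the arithmetic:

# Annotation, not part of the captured console output: the transfer size
# implied by the spdk_dd invocation above. The 4 KiB block size is an
# assumption; it is consistent with the 1024 MB progress total below.
ASSUMED_BLOCK_SIZE = 4096            # bytes per block (assumption)
count = 262144                       # from --count=262144 above
total_bytes = count * ASSUMED_BLOCK_SIZE
assert total_bytes == 1 << 30        # 1 GiB, i.e. 1024 MiB
print(total_bytes // (1024 * 1024), "MiB")   # -> 1024 MiB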
00:35:19.595 [2024-04-17 14:54:27.928422] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82847 ] 00:35:19.595 [2024-04-17 14:54:28.107686] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:19.853 [2024-04-17 14:54:28.436425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:20.478 [2024-04-17 14:54:28.890445] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:20.478 [2024-04-17 14:54:28.890730] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:20.478 [2024-04-17 14:54:29.052708] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:20.478 [2024-04-17 14:54:29.052985] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:35:20.478 [2024-04-17 14:54:29.053149] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:35:20.478 [2024-04-17 14:54:29.053202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.478 [2024-04-17 14:54:29.053362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:20.478 [2024-04-17 14:54:29.053413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:20.478 [2024-04-17 14:54:29.053457] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:35:20.478 [2024-04-17 14:54:29.053528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.478 [2024-04-17 14:54:29.053594] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:35:20.478 [2024-04-17 14:54:29.055264] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:35:20.478 [2024-04-17 14:54:29.055464] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:20.478 [2024-04-17 14:54:29.055640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:20.478 [2024-04-17 14:54:29.055694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.876 ms 00:35:20.478 [2024-04-17 14:54:29.055736] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.478 [2024-04-17 14:54:29.057551] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:35:20.757 [2024-04-17 14:54:29.082226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:20.757 [2024-04-17 14:54:29.082463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:35:20.757 [2024-04-17 14:54:29.082599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.675 ms 00:35:20.757 [2024-04-17 14:54:29.082650] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.757 [2024-04-17 14:54:29.082757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:20.757 [2024-04-17 14:54:29.082872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:35:20.757 [2024-04-17 14:54:29.082922] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:35:20.757 [2024-04-17 14:54:29.082963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.757 [2024-04-17 14:54:29.090462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:20.757 [2024-04-17 
14:54:29.090658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:20.757 [2024-04-17 14:54:29.090760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.319 ms 00:35:20.757 [2024-04-17 14:54:29.090810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.757 [2024-04-17 14:54:29.090968] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:20.757 [2024-04-17 14:54:29.091023] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:20.757 [2024-04-17 14:54:29.091096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:35:20.757 [2024-04-17 14:54:29.091136] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.757 [2024-04-17 14:54:29.091220] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:20.757 [2024-04-17 14:54:29.091272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:35:20.757 [2024-04-17 14:54:29.091314] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:35:20.757 [2024-04-17 14:54:29.091455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.757 [2024-04-17 14:54:29.091550] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:35:20.757 [2024-04-17 14:54:29.098788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:20.758 [2024-04-17 14:54:29.098943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:20.758 [2024-04-17 14:54:29.099033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.247 ms 00:35:20.758 [2024-04-17 14:54:29.099075] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.758 [2024-04-17 14:54:29.099146] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:20.758 [2024-04-17 14:54:29.099186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:35:20.758 [2024-04-17 14:54:29.099278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:35:20.758 [2024-04-17 14:54:29.099350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.758 [2024-04-17 14:54:29.099445] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:35:20.758 [2024-04-17 14:54:29.099513] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:35:20.758 [2024-04-17 14:54:29.099614] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:35:20.758 [2024-04-17 14:54:29.099733] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:35:20.758 [2024-04-17 14:54:29.099858] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:35:20.758 [2024-04-17 14:54:29.099916] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:35:20.758 [2024-04-17 14:54:29.100033] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:35:20.758 [2024-04-17 14:54:29.100120] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:35:20.758 [2024-04-17 14:54:29.100180] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:35:20.758 [2024-04-17 14:54:29.100235] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:35:20.758 [2024-04-17 14:54:29.100270] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:35:20.758 [2024-04-17 14:54:29.100303] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:35:20.758 [2024-04-17 14:54:29.100414] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:35:20.758 [2024-04-17 14:54:29.100481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:20.758 [2024-04-17 14:54:29.100533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:35:20.758 [2024-04-17 14:54:29.100568] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.039 ms 00:35:20.758 [2024-04-17 14:54:29.100602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.758 [2024-04-17 14:54:29.100700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:20.758 [2024-04-17 14:54:29.100740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:35:20.758 [2024-04-17 14:54:29.100841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:35:20.758 [2024-04-17 14:54:29.100905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.758 [2024-04-17 14:54:29.101008] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:35:20.758 [2024-04-17 14:54:29.101047] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:35:20.758 [2024-04-17 14:54:29.101082] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:20.758 [2024-04-17 14:54:29.101117] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:20.758 [2024-04-17 14:54:29.101228] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:35:20.758 [2024-04-17 14:54:29.101271] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:35:20.758 [2024-04-17 14:54:29.101306] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:35:20.758 [2024-04-17 14:54:29.101340] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:35:20.758 [2024-04-17 14:54:29.101375] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:35:20.758 [2024-04-17 14:54:29.101408] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:20.758 [2024-04-17 14:54:29.101514] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:35:20.758 [2024-04-17 14:54:29.101554] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:35:20.758 [2024-04-17 14:54:29.101602] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:20.758 [2024-04-17 14:54:29.101636] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:35:20.758 [2024-04-17 14:54:29.101738] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:35:20.758 [2024-04-17 14:54:29.101779] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:20.758 [2024-04-17 14:54:29.101861] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:35:20.758 [2024-04-17 14:54:29.101901] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:35:20.758 [2024-04-17 14:54:29.101935] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:35:20.758 [2024-04-17 14:54:29.102003] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:35:20.758 [2024-04-17 14:54:29.102041] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:35:20.758 [2024-04-17 14:54:29.102075] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:35:20.758 [2024-04-17 14:54:29.102164] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:35:20.758 [2024-04-17 14:54:29.102228] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:35:20.758 [2024-04-17 14:54:29.102261] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:35:20.758 [2024-04-17 14:54:29.102295] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:35:20.758 [2024-04-17 14:54:29.102328] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:35:20.758 [2024-04-17 14:54:29.102373] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:35:20.758 [2024-04-17 14:54:29.102444] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:35:20.758 [2024-04-17 14:54:29.102518] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:35:20.758 [2024-04-17 14:54:29.102613] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:35:20.758 [2024-04-17 14:54:29.102675] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:35:20.758 [2024-04-17 14:54:29.102711] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:35:20.758 [2024-04-17 14:54:29.102745] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:35:20.758 [2024-04-17 14:54:29.102780] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:35:20.758 [2024-04-17 14:54:29.102815] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:35:20.758 [2024-04-17 14:54:29.102850] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:20.758 [2024-04-17 14:54:29.102892] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:35:20.758 [2024-04-17 14:54:29.102972] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:35:20.758 [2024-04-17 14:54:29.103122] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:20.758 [2024-04-17 14:54:29.103163] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:35:20.758 [2024-04-17 14:54:29.103211] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:35:20.758 [2024-04-17 14:54:29.103246] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:20.758 [2024-04-17 14:54:29.103286] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:20.758 [2024-04-17 14:54:29.103378] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:35:20.758 [2024-04-17 14:54:29.103420] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:35:20.758 [2024-04-17 14:54:29.103455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:35:20.758 [2024-04-17 14:54:29.103512] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:35:20.758 [2024-04-17 14:54:29.103550] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:35:20.758 [2024-04-17 14:54:29.103584] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:35:20.758 [2024-04-17 14:54:29.103670] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:35:20.758 [2024-04-17 14:54:29.103734] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:20.758 [2024-04-17 14:54:29.103790] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:35:20.758 [2024-04-17 14:54:29.103844] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:35:20.759 [2024-04-17 14:54:29.103956] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:35:20.759 [2024-04-17 14:54:29.104010] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:35:20.759 [2024-04-17 14:54:29.104108] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:35:20.759 [2024-04-17 14:54:29.104168] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:35:20.759 [2024-04-17 14:54:29.104265] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:35:20.759 [2024-04-17 14:54:29.104325] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:35:20.759 [2024-04-17 14:54:29.104379] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:35:20.759 [2024-04-17 14:54:29.104466] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:35:20.759 [2024-04-17 14:54:29.104545] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:35:20.759 [2024-04-17 14:54:29.104651] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:35:20.759 [2024-04-17 14:54:29.104711] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:35:20.759 [2024-04-17 14:54:29.104766] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:35:20.759 [2024-04-17 14:54:29.104858] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:20.759 [2024-04-17 14:54:29.104915] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:35:20.759 [2024-04-17 14:54:29.105076] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:35:20.759 [2024-04-17 14:54:29.105131] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:35:20.759 [2024-04-17 14:54:29.105186] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:35:20.759 [2024-04-17 14:54:29.105241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:20.759 [2024-04-17 14:54:29.105276] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:35:20.759 [2024-04-17 14:54:29.105312] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.275 ms 00:35:20.759 [2024-04-17 14:54:29.105400] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.759 [2024-04-17 14:54:29.134125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:20.759 [2024-04-17 14:54:29.134398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:20.759 [2024-04-17 14:54:29.134595] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.581 ms 00:35:20.759 [2024-04-17 14:54:29.134645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.759 [2024-04-17 14:54:29.134782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:20.759 [2024-04-17 14:54:29.134855] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:35:20.759 [2024-04-17 14:54:29.134932] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:35:20.759 [2024-04-17 14:54:29.134967] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.759 [2024-04-17 14:54:29.209856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:20.759 [2024-04-17 14:54:29.210123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:20.759 [2024-04-17 14:54:29.210217] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.782 ms 00:35:20.759 [2024-04-17 14:54:29.210268] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.759 [2024-04-17 14:54:29.210357] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:20.759 [2024-04-17 14:54:29.210406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:20.759 [2024-04-17 14:54:29.210510] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:20.759 [2024-04-17 14:54:29.210554] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.759 [2024-04-17 14:54:29.211107] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:20.759 [2024-04-17 14:54:29.211223] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:20.759 [2024-04-17 14:54:29.211302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:35:20.759 [2024-04-17 14:54:29.211341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.759 [2024-04-17 14:54:29.211571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:20.759 [2024-04-17 14:54:29.211621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:20.759 [2024-04-17 14:54:29.211753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:35:20.759 [2024-04-17 14:54:29.211795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.759 [2024-04-17 14:54:29.237921] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:20.759 [2024-04-17 14:54:29.238149] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:20.759 [2024-04-17 14:54:29.238243] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.071 ms 00:35:20.759 [2024-04-17 
14:54:29.238284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.759 [2024-04-17 14:54:29.261894] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:35:20.759 [2024-04-17 14:54:29.262146] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:35:20.759 [2024-04-17 14:54:29.262287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:20.759 [2024-04-17 14:54:29.262333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:35:20.759 [2024-04-17 14:54:29.262410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.811 ms 00:35:20.759 [2024-04-17 14:54:29.262453] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.759 [2024-04-17 14:54:29.297423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:20.759 [2024-04-17 14:54:29.297707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:35:20.759 [2024-04-17 14:54:29.297815] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.808 ms 00:35:20.759 [2024-04-17 14:54:29.297859] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.759 [2024-04-17 14:54:29.322284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:20.759 [2024-04-17 14:54:29.322550] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:35:20.759 [2024-04-17 14:54:29.322640] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.331 ms 00:35:20.759 [2024-04-17 14:54:29.322680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.759 [2024-04-17 14:54:29.345628] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:20.759 [2024-04-17 14:54:29.345841] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:35:20.759 [2024-04-17 14:54:29.345982] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.843 ms 00:35:20.759 [2024-04-17 14:54:29.346024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:20.759 [2024-04-17 14:54:29.346716] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:20.759 [2024-04-17 14:54:29.346851] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:35:20.760 [2024-04-17 14:54:29.346940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:35:20.760 [2024-04-17 14:54:29.346981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.019 [2024-04-17 14:54:29.456796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.019 [2024-04-17 14:54:29.457049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:35:21.019 [2024-04-17 14:54:29.457155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 109.670 ms 00:35:21.019 [2024-04-17 14:54:29.457205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.019 [2024-04-17 14:54:29.473592] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:35:21.019 [2024-04-17 14:54:29.477327] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.019 [2024-04-17 14:54:29.477500] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:35:21.019 [2024-04-17 14:54:29.477612] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.020 ms 00:35:21.019 [2024-04-17 14:54:29.477666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.019 [2024-04-17 14:54:29.477804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.019 [2024-04-17 14:54:29.477969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:35:21.019 [2024-04-17 14:54:29.478021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:35:21.019 [2024-04-17 14:54:29.478056] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.019 [2024-04-17 14:54:29.479698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.019 [2024-04-17 14:54:29.479838] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:35:21.019 [2024-04-17 14:54:29.479918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.567 ms 00:35:21.019 [2024-04-17 14:54:29.479955] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.019 [2024-04-17 14:54:29.482291] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.019 [2024-04-17 14:54:29.482420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:35:21.019 [2024-04-17 14:54:29.482569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.269 ms 00:35:21.019 [2024-04-17 14:54:29.482619] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.019 [2024-04-17 14:54:29.482683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.019 [2024-04-17 14:54:29.482722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:35:21.019 [2024-04-17 14:54:29.482758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:21.019 [2024-04-17 14:54:29.482852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.019 [2024-04-17 14:54:29.482944] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:35:21.019 [2024-04-17 14:54:29.482987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.019 [2024-04-17 14:54:29.483028] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:35:21.019 [2024-04-17 14:54:29.483064] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:35:21.019 [2024-04-17 14:54:29.483198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.019 [2024-04-17 14:54:29.527870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.019 [2024-04-17 14:54:29.528065] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:35:21.019 [2024-04-17 14:54:29.528173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.615 ms 00:35:21.019 [2024-04-17 14:54:29.528215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.019 [2024-04-17 14:54:29.528334] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:21.019 [2024-04-17 14:54:29.528431] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:35:21.019 [2024-04-17 14:54:29.528472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:35:21.019 [2024-04-17 14:54:29.528527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:21.019 [2024-04-17 14:54:29.535130] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 480.506 ms, result 0 00:35:51.384  Copying: 944/1048576 [kB] (944 kBps) Copying: 4640/1048576 [kB] (3696 kBps) Copying: 34/1024 [MB] (29 MBps) Copying: 69/1024 [MB] (35 MBps) Copying: 103/1024 [MB] (34 MBps) Copying: 138/1024 [MB] (35 MBps) Copying: 177/1024 [MB] (38 MBps) Copying: 216/1024 [MB] (39 MBps) Copying: 256/1024 [MB] (39 MBps) Copying: 295/1024 [MB] (39 MBps) Copying: 334/1024 [MB] (38 MBps) Copying: 373/1024 [MB] (39 MBps) Copying: 412/1024 [MB] (38 MBps) Copying: 450/1024 [MB] (37 MBps) Copying: 488/1024 [MB] (38 MBps) Copying: 526/1024 [MB] (37 MBps) Copying: 564/1024 [MB] (37 MBps) Copying: 601/1024 [MB] (37 MBps) Copying: 638/1024 [MB] (37 MBps) Copying: 676/1024 [MB] (37 MBps) Copying: 713/1024 [MB] (37 MBps) Copying: 750/1024 [MB] (37 MBps) Copying: 784/1024 [MB] (33 MBps) Copying: 823/1024 [MB] (39 MBps) Copying: 861/1024 [MB] (38 MBps) Copying: 899/1024 [MB] (37 MBps) Copying: 939/1024 [MB] (39 MBps) Copying: 979/1024 [MB] (39 MBps) Copying: 1017/1024 [MB] (37 MBps) Copying: 1024/1024 [MB] (average 35 MBps)[2024-04-17 14:54:59.755750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.384 [2024-04-17 14:54:59.756032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:51.384 [2024-04-17 14:54:59.756167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:51.384 [2024-04-17 14:54:59.756218] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.384 [2024-04-17 14:54:59.756291] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:51.384 [2024-04-17 14:54:59.762066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.384 [2024-04-17 14:54:59.762305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:51.384 [2024-04-17 14:54:59.762455] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.599 ms 00:35:51.384 [2024-04-17 14:54:59.762529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.384 [2024-04-17 14:54:59.762977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.384 [2024-04-17 14:54:59.763132] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:51.384 [2024-04-17 14:54:59.763253] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:35:51.384 [2024-04-17 14:54:59.763389] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.384 [2024-04-17 14:54:59.776629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.384 [2024-04-17 14:54:59.776831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:35:51.384 [2024-04-17 14:54:59.776950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.165 ms 00:35:51.384 [2024-04-17 14:54:59.777006] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.384 [2024-04-17 14:54:59.785826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.384 [2024-04-17 14:54:59.786106] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:35:51.384 [2024-04-17 14:54:59.786290] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.732 ms 00:35:51.384 [2024-04-17 14:54:59.786417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.384 [2024-04-17 14:54:59.827784] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.384 [2024-04-17 14:54:59.828065] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:35:51.384 [2024-04-17 14:54:59.828207] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.118 ms 00:35:51.384 [2024-04-17 14:54:59.828273] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.385 [2024-04-17 14:54:59.863152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.385 [2024-04-17 14:54:59.863435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:35:51.385 [2024-04-17 14:54:59.863651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.757 ms 00:35:51.385 [2024-04-17 14:54:59.863715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.385 [2024-04-17 14:54:59.867564] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.385 [2024-04-17 14:54:59.867761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:35:51.385 [2024-04-17 14:54:59.867877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.740 ms 00:35:51.385 [2024-04-17 14:54:59.867931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.385 [2024-04-17 14:54:59.932843] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.385 [2024-04-17 14:54:59.933113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:35:51.385 [2024-04-17 14:54:59.933235] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.753 ms 00:35:51.385 [2024-04-17 14:54:59.933291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.644 [2024-04-17 14:54:59.996083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.644 [2024-04-17 14:54:59.996379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:35:51.644 [2024-04-17 14:54:59.996580] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.686 ms 00:35:51.644 [2024-04-17 14:54:59.996655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.644 [2024-04-17 14:55:00.059593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.644 [2024-04-17 14:55:00.059896] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:35:51.644 [2024-04-17 14:55:00.060094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.816 ms 00:35:51.644 [2024-04-17 14:55:00.060153] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.644 [2024-04-17 14:55:00.123358] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.644 [2024-04-17 14:55:00.123677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:35:51.644 [2024-04-17 14:55:00.123807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.995 ms 00:35:51.644 [2024-04-17 14:55:00.123864] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.644 [2024-04-17 14:55:00.123972] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:51.644 [2024-04-17 14:55:00.124101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:35:51.644 [2024-04-17 14:55:00.124188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 state: open 00:35:51.644 [2024-04-17 
14:55:00.124322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:51.644 [2024-04-17 14:55:00.124404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:51.644 [2024-04-17 14:55:00.124479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:51.644 [2024-04-17 14:55:00.124764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:51.644 [2024-04-17 14:55:00.124849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:51.644 [2024-04-17 14:55:00.124931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:51.644 [2024-04-17 14:55:00.125014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:51.644 [2024-04-17 14:55:00.125180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:51.644 [2024-04-17 14:55:00.125269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:51.644 [2024-04-17 14:55:00.125351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:51.644 [2024-04-17 14:55:00.125433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:51.644 [2024-04-17 14:55:00.125596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:51.644 [2024-04-17 14:55:00.125681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:51.644 [2024-04-17 14:55:00.125763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:51.644 [2024-04-17 14:55:00.125922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:51.644 [2024-04-17 14:55:00.126094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:51.644 [2024-04-17 14:55:00.126221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:51.644 [2024-04-17 14:55:00.126305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:51.644 [2024-04-17 14:55:00.126415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:51.644 [2024-04-17 14:55:00.126508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:51.644 [2024-04-17 14:55:00.126666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:51.644 [2024-04-17 14:55:00.126745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:51.644 [2024-04-17 14:55:00.126820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:51.644 [2024-04-17 14:55:00.126895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:51.644 [2024-04-17 14:55:00.127044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 
00:35:51.644 [2024-04-17 14:55:00.127131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:51.644 [2024-04-17 14:55:00.127214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:51.644 [2024-04-17 14:55:00.127386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:51.644 [2024-04-17 14:55:00.127468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:51.644 [2024-04-17 14:55:00.127593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:51.644 [2024-04-17 14:55:00.127728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:51.644 [2024-04-17 14:55:00.127894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:51.644 [2024-04-17 14:55:00.128029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:51.644 [2024-04-17 14:55:00.128114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.128261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.128347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.128474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.128580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.128820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.128903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.128985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.129067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.129242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.129370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.129452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.129551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.129634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.129887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.129969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.130053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 
wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.130136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.130307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.130499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.130580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.130660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.130744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.130825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.131054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.131136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.131219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.131301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.131504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.131618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.131699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.131782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.131864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.132047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.132137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.132272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.132358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.132623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.132711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.132794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.132877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.133031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.133123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.133206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.133289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.133436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.133561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.133655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.133812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.133957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.134100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.134190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.134268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.134554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.134640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.134720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.134894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.135049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.135138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.135246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.135379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.135570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.135684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.135884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.136063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:51.645 [2024-04-17 14:55:00.136167] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:51.645 [2024-04-17 14:55:00.136223] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
6bac9abf-074b-45be-bea1-6bcd4aa06261 00:35:51.645 [2024-04-17 14:55:00.136359] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704 00:35:51.645 [2024-04-17 14:55:00.136411] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 144832 00:35:51.645 [2024-04-17 14:55:00.136464] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 142848 00:35:51.645 [2024-04-17 14:55:00.136541] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0139 00:35:51.645 [2024-04-17 14:55:00.136619] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:51.645 [2024-04-17 14:55:00.136735] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:51.645 [2024-04-17 14:55:00.136793] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:51.645 [2024-04-17 14:55:00.136839] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:51.645 [2024-04-17 14:55:00.136907] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:51.645 [2024-04-17 14:55:00.136959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.646 [2024-04-17 14:55:00.137058] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:51.646 [2024-04-17 14:55:00.137172] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.987 ms 00:35:51.646 [2024-04-17 14:55:00.137233] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.646 [2024-04-17 14:55:00.161591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.646 [2024-04-17 14:55:00.161809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:51.646 [2024-04-17 14:55:00.161905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.171 ms 00:35:51.646 [2024-04-17 14:55:00.161944] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.646 [2024-04-17 14:55:00.162280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:51.646 [2024-04-17 14:55:00.162441] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:51.646 [2024-04-17 14:55:00.162571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:35:51.646 [2024-04-17 14:55:00.162665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.646 [2024-04-17 14:55:00.225598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:51.646 [2024-04-17 14:55:00.225863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:51.646 [2024-04-17 14:55:00.225980] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:51.646 [2024-04-17 14:55:00.226026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.646 [2024-04-17 14:55:00.226133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:51.646 [2024-04-17 14:55:00.226172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:51.646 [2024-04-17 14:55:00.226261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:51.646 [2024-04-17 14:55:00.226303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.646 [2024-04-17 14:55:00.226453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:51.646 [2024-04-17 14:55:00.226536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim 
map 00:35:51.646 [2024-04-17 14:55:00.226662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:51.646 [2024-04-17 14:55:00.226700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.646 [2024-04-17 14:55:00.226803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:51.646 [2024-04-17 14:55:00.226847] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:51.646 [2024-04-17 14:55:00.226885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:51.646 [2024-04-17 14:55:00.226921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.904 [2024-04-17 14:55:00.355875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:51.904 [2024-04-17 14:55:00.356149] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:51.904 [2024-04-17 14:55:00.356273] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:51.904 [2024-04-17 14:55:00.356318] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.904 [2024-04-17 14:55:00.410267] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:51.904 [2024-04-17 14:55:00.410537] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:51.904 [2024-04-17 14:55:00.410634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:51.904 [2024-04-17 14:55:00.410677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.904 [2024-04-17 14:55:00.410793] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:51.904 [2024-04-17 14:55:00.410832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:51.904 [2024-04-17 14:55:00.410963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:51.904 [2024-04-17 14:55:00.411051] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.904 [2024-04-17 14:55:00.411126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:51.904 [2024-04-17 14:55:00.411164] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:51.904 [2024-04-17 14:55:00.411198] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:51.904 [2024-04-17 14:55:00.411232] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.904 [2024-04-17 14:55:00.411386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:51.904 [2024-04-17 14:55:00.411478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:51.904 [2024-04-17 14:55:00.411584] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:51.904 [2024-04-17 14:55:00.411620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.904 [2024-04-17 14:55:00.411699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:51.904 [2024-04-17 14:55:00.411737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:51.904 [2024-04-17 14:55:00.411769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:51.904 [2024-04-17 14:55:00.411801] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.904 [2024-04-17 14:55:00.411863] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:51.904 [2024-04-17 14:55:00.411976] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:51.904 [2024-04-17 14:55:00.412070] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:51.904 [2024-04-17 14:55:00.412103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.904 [2024-04-17 14:55:00.412176] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:51.904 [2024-04-17 14:55:00.412211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:51.904 [2024-04-17 14:55:00.412244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:51.904 [2024-04-17 14:55:00.412276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:51.904 [2024-04-17 14:55:00.412444] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 656.662 ms, result 0 00:35:53.279 00:35:53.279 00:35:53.279 14:55:01 -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:35:55.845 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:35:55.845 14:55:04 -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:35:55.845 [2024-04-17 14:55:04.243085] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:35:55.845 [2024-04-17 14:55:04.243251] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83206 ] 00:35:55.845 [2024-04-17 14:55:04.427029] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:56.474 [2024-04-17 14:55:04.754330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:56.733 [2024-04-17 14:55:05.221016] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:56.733 [2024-04-17 14:55:05.221087] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:56.992 [2024-04-17 14:55:05.384797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:56.992 [2024-04-17 14:55:05.384865] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:35:56.992 [2024-04-17 14:55:05.384883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:35:56.992 [2024-04-17 14:55:05.384896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:56.992 [2024-04-17 14:55:05.384976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:56.992 [2024-04-17 14:55:05.384996] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:56.992 [2024-04-17 14:55:05.385014] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:35:56.992 [2024-04-17 14:55:05.385025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:56.992 [2024-04-17 14:55:05.385052] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:35:56.992 [2024-04-17 14:55:05.386402] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:35:56.992 [2024-04-17 14:55:05.386434] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:35:56.992 [2024-04-17 14:55:05.386447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:56.992 [2024-04-17 14:55:05.386460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.387 ms 00:35:56.992 [2024-04-17 14:55:05.386472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:56.992 [2024-04-17 14:55:05.388077] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:35:56.992 [2024-04-17 14:55:05.414454] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:56.992 [2024-04-17 14:55:05.414554] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:35:56.992 [2024-04-17 14:55:05.414577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.372 ms 00:35:56.992 [2024-04-17 14:55:05.414593] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:56.992 [2024-04-17 14:55:05.414707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:56.992 [2024-04-17 14:55:05.414726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:35:56.992 [2024-04-17 14:55:05.414748] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:35:56.992 [2024-04-17 14:55:05.414768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:56.992 [2024-04-17 14:55:05.423599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:56.992 [2024-04-17 14:55:05.423673] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:56.992 [2024-04-17 14:55:05.423700] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.679 ms 00:35:56.992 [2024-04-17 14:55:05.423716] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:56.992 [2024-04-17 14:55:05.423875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:56.992 [2024-04-17 14:55:05.423901] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:56.992 [2024-04-17 14:55:05.423923] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:35:56.992 [2024-04-17 14:55:05.423939] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:56.992 [2024-04-17 14:55:05.424011] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:56.992 [2024-04-17 14:55:05.424030] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:35:56.992 [2024-04-17 14:55:05.424043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:35:56.992 [2024-04-17 14:55:05.424055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:56.992 [2024-04-17 14:55:05.424088] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:35:56.992 [2024-04-17 14:55:05.431205] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:56.992 [2024-04-17 14:55:05.431287] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:56.992 [2024-04-17 14:55:05.431313] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.122 ms 00:35:56.992 [2024-04-17 14:55:05.431333] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:56.992 [2024-04-17 14:55:05.431402] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:56.992 [2024-04-17 14:55:05.431423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 
00:35:56.992 [2024-04-17 14:55:05.431444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:35:56.992 [2024-04-17 14:55:05.431464] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:56.992 [2024-04-17 14:55:05.431584] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:35:56.992 [2024-04-17 14:55:05.431616] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:35:56.992 [2024-04-17 14:55:05.431656] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:35:56.992 [2024-04-17 14:55:05.431677] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:35:56.992 [2024-04-17 14:55:05.431759] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:35:56.992 [2024-04-17 14:55:05.431786] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:35:56.992 [2024-04-17 14:55:05.431806] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:35:56.992 [2024-04-17 14:55:05.431822] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:35:56.992 [2024-04-17 14:55:05.431851] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:35:56.992 [2024-04-17 14:55:05.431873] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:35:56.992 [2024-04-17 14:55:05.431893] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:35:56.992 [2024-04-17 14:55:05.431913] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:35:56.992 [2024-04-17 14:55:05.431933] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:35:56.992 [2024-04-17 14:55:05.431955] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:56.993 [2024-04-17 14:55:05.431977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:35:56.993 [2024-04-17 14:55:05.432000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.373 ms 00:35:56.993 [2024-04-17 14:55:05.432020] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:56.993 [2024-04-17 14:55:05.432122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:56.993 [2024-04-17 14:55:05.432148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:35:56.993 [2024-04-17 14:55:05.432171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:35:56.993 [2024-04-17 14:55:05.432191] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:56.993 [2024-04-17 14:55:05.432301] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:35:56.993 [2024-04-17 14:55:05.432337] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:35:56.993 [2024-04-17 14:55:05.432360] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:56.993 [2024-04-17 14:55:05.432382] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:56.993 [2024-04-17 14:55:05.432403] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:35:56.993 [2024-04-17 14:55:05.432423] ftl_layout.c: 
116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:35:56.993 [2024-04-17 14:55:05.432444] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:35:56.993 [2024-04-17 14:55:05.432464] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:35:56.993 [2024-04-17 14:55:05.432485] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:35:56.993 [2024-04-17 14:55:05.432520] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:56.993 [2024-04-17 14:55:05.432541] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:35:56.993 [2024-04-17 14:55:05.432561] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:35:56.993 [2024-04-17 14:55:05.432598] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:56.993 [2024-04-17 14:55:05.432618] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:35:56.993 [2024-04-17 14:55:05.432639] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:35:56.993 [2024-04-17 14:55:05.432659] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:56.993 [2024-04-17 14:55:05.432678] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:35:56.993 [2024-04-17 14:55:05.432698] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:35:56.993 [2024-04-17 14:55:05.432717] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:56.993 [2024-04-17 14:55:05.432737] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:35:56.993 [2024-04-17 14:55:05.432758] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:35:56.993 [2024-04-17 14:55:05.432779] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:35:56.993 [2024-04-17 14:55:05.432798] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:35:56.993 [2024-04-17 14:55:05.432817] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:35:56.993 [2024-04-17 14:55:05.432836] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:35:56.993 [2024-04-17 14:55:05.432855] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:35:56.993 [2024-04-17 14:55:05.432872] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:35:56.993 [2024-04-17 14:55:05.432886] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:35:56.993 [2024-04-17 14:55:05.432905] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:35:56.993 [2024-04-17 14:55:05.432925] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:35:56.993 [2024-04-17 14:55:05.432944] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:35:56.993 [2024-04-17 14:55:05.432963] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:35:56.993 [2024-04-17 14:55:05.432984] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:35:56.993 [2024-04-17 14:55:05.433004] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:35:56.993 [2024-04-17 14:55:05.433023] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:35:56.993 [2024-04-17 14:55:05.433043] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:35:56.993 [2024-04-17 14:55:05.433062] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:56.993 [2024-04-17 
14:55:05.433082] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:35:56.993 [2024-04-17 14:55:05.433102] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:35:56.993 [2024-04-17 14:55:05.433122] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:56.993 [2024-04-17 14:55:05.433141] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:35:56.993 [2024-04-17 14:55:05.433170] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:35:56.993 [2024-04-17 14:55:05.433192] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:56.993 [2024-04-17 14:55:05.433219] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:56.993 [2024-04-17 14:55:05.433239] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:35:56.993 [2024-04-17 14:55:05.433260] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:35:56.993 [2024-04-17 14:55:05.433280] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:35:56.993 [2024-04-17 14:55:05.433299] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:35:56.993 [2024-04-17 14:55:05.433319] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:35:56.993 [2024-04-17 14:55:05.433339] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:35:56.993 [2024-04-17 14:55:05.433362] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:35:56.993 [2024-04-17 14:55:05.433387] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:56.993 [2024-04-17 14:55:05.433410] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:35:56.993 [2024-04-17 14:55:05.433432] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:35:56.993 [2024-04-17 14:55:05.433454] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:35:56.993 [2024-04-17 14:55:05.433476] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:35:56.993 [2024-04-17 14:55:05.433524] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:35:56.993 [2024-04-17 14:55:05.433547] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:35:56.993 [2024-04-17 14:55:05.433568] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:35:56.993 [2024-04-17 14:55:05.433591] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:35:56.993 [2024-04-17 14:55:05.433613] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:35:56.993 [2024-04-17 14:55:05.433635] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:35:56.993 [2024-04-17 14:55:05.433657] 
upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:35:56.993 [2024-04-17 14:55:05.433680] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:35:56.993 [2024-04-17 14:55:05.433702] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:35:56.993 [2024-04-17 14:55:05.433722] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:35:56.993 [2024-04-17 14:55:05.433745] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:56.993 [2024-04-17 14:55:05.433769] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:35:56.993 [2024-04-17 14:55:05.433790] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:35:56.993 [2024-04-17 14:55:05.433812] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:35:56.993 [2024-04-17 14:55:05.433833] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:35:56.993 [2024-04-17 14:55:05.433856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:56.993 [2024-04-17 14:55:05.433878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:35:56.993 [2024-04-17 14:55:05.433899] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.605 ms 00:35:56.994 [2024-04-17 14:55:05.433920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:56.994 [2024-04-17 14:55:05.462505] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:56.994 [2024-04-17 14:55:05.462566] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:56.994 [2024-04-17 14:55:05.462585] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.488 ms 00:35:56.994 [2024-04-17 14:55:05.462598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:56.994 [2024-04-17 14:55:05.462714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:56.994 [2024-04-17 14:55:05.462733] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:35:56.994 [2024-04-17 14:55:05.462746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:35:56.994 [2024-04-17 14:55:05.462758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:56.994 [2024-04-17 14:55:05.534499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:56.994 [2024-04-17 14:55:05.534560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:56.994 [2024-04-17 14:55:05.534588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.646 ms 00:35:56.994 [2024-04-17 14:55:05.534605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:56.994 [2024-04-17 14:55:05.534671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:56.994 [2024-04-17 14:55:05.534693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
valid map 00:35:56.994 [2024-04-17 14:55:05.534713] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:56.994 [2024-04-17 14:55:05.534726] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:56.994 [2024-04-17 14:55:05.535236] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:56.994 [2024-04-17 14:55:05.535267] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:56.994 [2024-04-17 14:55:05.535281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:35:56.994 [2024-04-17 14:55:05.535293] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:56.994 [2024-04-17 14:55:05.535433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:56.994 [2024-04-17 14:55:05.535449] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:56.994 [2024-04-17 14:55:05.535462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:35:56.994 [2024-04-17 14:55:05.535473] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:56.994 [2024-04-17 14:55:05.562248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:56.994 [2024-04-17 14:55:05.562308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:56.994 [2024-04-17 14:55:05.562330] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.735 ms 00:35:56.994 [2024-04-17 14:55:05.562346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:56.994 [2024-04-17 14:55:05.586754] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:35:56.994 [2024-04-17 14:55:05.586840] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:35:56.994 [2024-04-17 14:55:05.586861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:56.994 [2024-04-17 14:55:05.586874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:35:56.994 [2024-04-17 14:55:05.586890] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.332 ms 00:35:56.994 [2024-04-17 14:55:05.586902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.253 [2024-04-17 14:55:05.625325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.253 [2024-04-17 14:55:05.625409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:35:57.253 [2024-04-17 14:55:05.625427] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.354 ms 00:35:57.253 [2024-04-17 14:55:05.625440] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.253 [2024-04-17 14:55:05.650129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.253 [2024-04-17 14:55:05.650200] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:35:57.253 [2024-04-17 14:55:05.650218] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.560 ms 00:35:57.253 [2024-04-17 14:55:05.650243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.253 [2024-04-17 14:55:05.674115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.253 [2024-04-17 14:55:05.674191] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:35:57.253 [2024-04-17 14:55:05.674209] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.786 ms 00:35:57.253 [2024-04-17 14:55:05.674222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.253 [2024-04-17 14:55:05.674886] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.253 [2024-04-17 14:55:05.674922] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:35:57.253 [2024-04-17 14:55:05.674938] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.474 ms 00:35:57.253 [2024-04-17 14:55:05.674950] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.253 [2024-04-17 14:55:05.787557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.253 [2024-04-17 14:55:05.787630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:35:57.253 [2024-04-17 14:55:05.787649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 112.572 ms 00:35:57.253 [2024-04-17 14:55:05.787662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.253 [2024-04-17 14:55:05.804720] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:35:57.253 [2024-04-17 14:55:05.808392] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.253 [2024-04-17 14:55:05.808442] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:35:57.253 [2024-04-17 14:55:05.808459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.640 ms 00:35:57.253 [2024-04-17 14:55:05.808472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.253 [2024-04-17 14:55:05.808608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.253 [2024-04-17 14:55:05.808624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:35:57.253 [2024-04-17 14:55:05.808638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:35:57.253 [2024-04-17 14:55:05.808650] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.253 [2024-04-17 14:55:05.809655] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.253 [2024-04-17 14:55:05.809687] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:35:57.253 [2024-04-17 14:55:05.809700] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.945 ms 00:35:57.253 [2024-04-17 14:55:05.809712] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.253 [2024-04-17 14:55:05.812219] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.253 [2024-04-17 14:55:05.812265] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:35:57.253 [2024-04-17 14:55:05.812279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.473 ms 00:35:57.253 [2024-04-17 14:55:05.812292] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.253 [2024-04-17 14:55:05.812330] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.253 [2024-04-17 14:55:05.812343] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:35:57.254 [2024-04-17 14:55:05.812356] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:35:57.254 [2024-04-17 14:55:05.812367] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.254 [2024-04-17 14:55:05.812406] mngt/ftl_mngt_self_test.c: 
208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:35:57.254 [2024-04-17 14:55:05.812421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.254 [2024-04-17 14:55:05.812433] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:35:57.254 [2024-04-17 14:55:05.812448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:35:57.254 [2024-04-17 14:55:05.812460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.512 [2024-04-17 14:55:05.859768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.512 [2024-04-17 14:55:05.859837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:35:57.512 [2024-04-17 14:55:05.859856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.258 ms 00:35:57.512 [2024-04-17 14:55:05.859869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.512 [2024-04-17 14:55:05.859982] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:57.512 [2024-04-17 14:55:05.860004] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:35:57.512 [2024-04-17 14:55:05.860018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:35:57.512 [2024-04-17 14:55:05.860030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:57.512 [2024-04-17 14:55:05.861342] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 476.050 ms, result 0 00:36:29.077  Copying: 31/1024 [MB] (31 MBps) Copying: 65/1024 [MB] (33 MBps) Copying: 97/1024 [MB] (32 MBps) Copying: 128/1024 [MB] (30 MBps) Copying: 161/1024 [MB] (32 MBps) Copying: 196/1024 [MB] (35 MBps) Copying: 230/1024 [MB] (34 MBps) Copying: 265/1024 [MB] (34 MBps) Copying: 298/1024 [MB] (33 MBps) Copying: 331/1024 [MB] (32 MBps) Copying: 365/1024 [MB] (33 MBps) Copying: 400/1024 [MB] (34 MBps) Copying: 434/1024 [MB] (34 MBps) Copying: 464/1024 [MB] (30 MBps) Copying: 495/1024 [MB] (30 MBps) Copying: 525/1024 [MB] (30 MBps) Copying: 556/1024 [MB] (30 MBps) Copying: 588/1024 [MB] (32 MBps) Copying: 621/1024 [MB] (32 MBps) Copying: 653/1024 [MB] (31 MBps) Copying: 685/1024 [MB] (32 MBps) Copying: 717/1024 [MB] (32 MBps) Copying: 748/1024 [MB] (30 MBps) Copying: 780/1024 [MB] (31 MBps) Copying: 812/1024 [MB] (32 MBps) Copying: 845/1024 [MB] (33 MBps) Copying: 879/1024 [MB] (33 MBps) Copying: 911/1024 [MB] (31 MBps) Copying: 942/1024 [MB] (31 MBps) Copying: 975/1024 [MB] (32 MBps) Copying: 1009/1024 [MB] (34 MBps) Copying: 1024/1024 [MB] (average 32 MBps)[2024-04-17 14:55:37.648175] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:29.077 [2024-04-17 14:55:37.648270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:36:29.077 [2024-04-17 14:55:37.648301] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:36:29.077 [2024-04-17 14:55:37.648325] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:29.077 [2024-04-17 14:55:37.648370] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:36:29.077 [2024-04-17 14:55:37.655037] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:29.077 [2024-04-17 14:55:37.655102] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:36:29.077 [2024-04-17 14:55:37.655123] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.629 ms 00:36:29.077 [2024-04-17 14:55:37.655140] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:29.077 [2024-04-17 14:55:37.655524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:29.077 [2024-04-17 14:55:37.655554] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:36:29.077 [2024-04-17 14:55:37.655571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:36:29.077 [2024-04-17 14:55:37.655586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:29.077 [2024-04-17 14:55:37.660618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:29.077 [2024-04-17 14:55:37.660655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:36:29.077 [2024-04-17 14:55:37.660672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.007 ms 00:36:29.077 [2024-04-17 14:55:37.660688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:29.077 [2024-04-17 14:55:37.668958] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:29.077 [2024-04-17 14:55:37.669014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:36:29.077 [2024-04-17 14:55:37.669033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.241 ms 00:36:29.077 [2024-04-17 14:55:37.669049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:29.335 [2024-04-17 14:55:37.730789] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:29.335 [2024-04-17 14:55:37.730873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:36:29.335 [2024-04-17 14:55:37.730896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.620 ms 00:36:29.335 [2024-04-17 14:55:37.730912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:29.335 [2024-04-17 14:55:37.763886] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:29.335 [2024-04-17 14:55:37.763960] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:36:29.335 [2024-04-17 14:55:37.763984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.896 ms 00:36:29.335 [2024-04-17 14:55:37.764001] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:29.335 [2024-04-17 14:55:37.768417] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:29.335 [2024-04-17 14:55:37.768474] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:36:29.335 [2024-04-17 14:55:37.768510] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.332 ms 00:36:29.335 [2024-04-17 14:55:37.768540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:29.336 [2024-04-17 14:55:37.829348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:29.336 [2024-04-17 14:55:37.829441] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:36:29.336 [2024-04-17 14:55:37.829465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.779 ms 00:36:29.336 [2024-04-17 14:55:37.829481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:29.336 [2024-04-17 14:55:37.891321] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:29.336 [2024-04-17 14:55:37.891385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 
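
Looking back at the layout dump during startup above, the 80.00 MiB l2p region follows directly from the advertised table parameters — a minimal sketch of the arithmetic, again assuming 4 KiB FTL blocks when converting the superblock's blk_sz values:

# Worked example (Python) using figures from the ftl_layout dump above.
ENTRIES = 20971520           # "L2P entries"
ADDR_SIZE = 4                # "L2P address size" in bytes
BLOCK_SIZE = 4096            # assumed FTL block size in bytes

print(ENTRIES * ADDR_SIZE / (1024 * 1024))   # 80.0 -> "Region l2p ... blocks: 80.00 MiB"

# Same region in blocks in the SB metadata layout ("Region type:0x2 ... blk_sz:0x5000"):
blk_sz = 0x5000              # 20480 blocks
print(blk_sz * BLOCK_SIZE / (1024 * 1024))   # 80.0 MiB again

# User-visible capacity those entries address:
print(ENTRIES * BLOCK_SIZE / (1024 ** 3))    # 80.0 GiB of mappable LBAs
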
00:36:29.336 [2024-04-17 14:55:37.891412] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.753 ms 00:36:29.336 [2024-04-17 14:55:37.891432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:29.595 [2024-04-17 14:55:37.941002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:29.595 [2024-04-17 14:55:37.941055] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:36:29.595 [2024-04-17 14:55:37.941071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.475 ms 00:36:29.595 [2024-04-17 14:55:37.941082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:29.595 [2024-04-17 14:55:37.984045] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:29.595 [2024-04-17 14:55:37.984096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:36:29.595 [2024-04-17 14:55:37.984112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.836 ms 00:36:29.595 [2024-04-17 14:55:37.984123] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:29.595 [2024-04-17 14:55:37.984184] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:36:29.595 [2024-04-17 14:55:37.984205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:36:29.595 [2024-04-17 14:55:37.984221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 state: open 00:36:29.595 [2024-04-17 14:55:37.984235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:36:29.595 [2024-04-17 14:55:37.984248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:36:29.595 [2024-04-17 14:55:37.984260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:36:29.595 [2024-04-17 14:55:37.984274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:36:29.595 [2024-04-17 14:55:37.984287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:36:29.595 [2024-04-17 14:55:37.984300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:36:29.595 [2024-04-17 14:55:37.984312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:36:29.595 [2024-04-17 14:55:37.984325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:36:29.595 [2024-04-17 14:55:37.984337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:36:29.595 [2024-04-17 14:55:37.984350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:36:29.595 [2024-04-17 14:55:37.984362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 
[2024-04-17 14:55:37.984413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 
state: free 00:36:29.596 [2024-04-17 14:55:37.984735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.984996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.985009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.985022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:36:29.596 [2024-04-17 14:55:37.985034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 
0 / 261120 wr_cnt: 0 state: free
00:36:29.596 [2024-04-17 14:55:37.985046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 67-100: 0 / 261120 wr_cnt: 0 state: free [34 identical per-band entries condensed; every remaining band is empty]
00:36:29.597 [2024-04-17 14:55:37.985481] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:36:29.597 [2024-04-17 14:55:37.985512] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6bac9abf-074b-45be-bea1-6bcd4aa06261
00:36:29.597 [2024-04-17 14:55:37.985524] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704
00:36:29.597 [2024-04-17 14:55:37.985543] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:36:29.597 [2024-04-17 14:55:37.985554] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:36:29.597 [2024-04-17 14:55:37.985578] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:36:29.597 [2024-04-17 14:55:37.985589] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:36:29.597 [2024-04-17 14:55:37.985601] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:36:29.597 [2024-04-17 14:55:37.985612] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:36:29.597 [2024-04-17 14:55:37.985623] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:36:29.597 [2024-04-17 14:55:37.985633] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:36:29.597 [2024-04-17 14:55:37.985645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:29.597 [2024-04-17 14:55:37.985657] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:36:29.597 [2024-04-17 14:55:37.985669] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.461 ms
00:36:29.597 [2024-04-17 14:55:37.985680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:29.597 [2024-04-17 14:55:38.008374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:29.597 [2024-04-17 14:55:38.008415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:36:29.597 [2024-04-17 14:55:38.008446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.637 ms
00:36:29.597 [2024-04-17 14:55:38.008457] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:29.597 [2024-04-17 14:55:38.008723]
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:29.597 [2024-04-17 14:55:38.008740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:36:29.597 [2024-04-17 14:55:38.008752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:36:29.597 [2024-04-17 14:55:38.008763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:29.597 [2024-04-17 14:55:38.072260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:29.597 [2024-04-17 14:55:38.072320] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:29.597 [2024-04-17 14:55:38.072353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:29.597 [2024-04-17 14:55:38.072365] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:29.597 [2024-04-17 14:55:38.072441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:29.597 [2024-04-17 14:55:38.072454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:29.597 [2024-04-17 14:55:38.072467] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:29.597 [2024-04-17 14:55:38.072478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:29.597 [2024-04-17 14:55:38.072595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:29.597 [2024-04-17 14:55:38.072610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:29.597 [2024-04-17 14:55:38.072623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:29.597 [2024-04-17 14:55:38.072634] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:29.597 [2024-04-17 14:55:38.072653] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:29.597 [2024-04-17 14:55:38.072666] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:29.597 [2024-04-17 14:55:38.072677] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:29.597 [2024-04-17 14:55:38.072688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:29.915 [2024-04-17 14:55:38.209470] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:29.915 [2024-04-17 14:55:38.209545] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:29.915 [2024-04-17 14:55:38.209561] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:29.915 [2024-04-17 14:55:38.209589] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:29.915 [2024-04-17 14:55:38.264209] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:29.915 [2024-04-17 14:55:38.264273] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:29.915 [2024-04-17 14:55:38.264289] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:29.915 [2024-04-17 14:55:38.264302] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:29.915 [2024-04-17 14:55:38.264407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:29.915 [2024-04-17 14:55:38.264420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:29.915 [2024-04-17 14:55:38.264432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:29.915 [2024-04-17 14:55:38.264442] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:29.915 [2024-04-17 14:55:38.264507] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:29.915 [2024-04-17 14:55:38.264520] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:29.915 [2024-04-17 14:55:38.264532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:29.915 [2024-04-17 14:55:38.264543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:29.915 [2024-04-17 14:55:38.264706] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:29.915 [2024-04-17 14:55:38.264728] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:29.915 [2024-04-17 14:55:38.264740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:29.915 [2024-04-17 14:55:38.264752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:29.915 [2024-04-17 14:55:38.264794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:29.915 [2024-04-17 14:55:38.264808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:36:29.915 [2024-04-17 14:55:38.264820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:29.915 [2024-04-17 14:55:38.264831] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:29.915 [2024-04-17 14:55:38.264869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:29.915 [2024-04-17 14:55:38.264894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:29.915 [2024-04-17 14:55:38.264906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:29.915 [2024-04-17 14:55:38.264917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:29.915 [2024-04-17 14:55:38.264964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:29.915 [2024-04-17 14:55:38.264978] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:29.916 [2024-04-17 14:55:38.264989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:29.916 [2024-04-17 14:55:38.265001] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:29.916 [2024-04-17 14:55:38.265124] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 616.928 ms, result 0 00:36:31.296 00:36:31.296 00:36:31.296 14:55:39 -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:36:33.264 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:36:33.264 14:55:41 -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:36:33.264 14:55:41 -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:36:33.264 14:55:41 -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:36:33.264 14:55:41 -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:36:33.522 14:55:41 -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:36:33.522 14:55:42 -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:36:33.522 14:55:42 -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:36:33.522 14:55:42 -- ftl/dirty_shutdown.sh@37 -- # killprocess 81552 00:36:33.522 14:55:42 -- common/autotest_common.sh@936 -- # '[' -z 81552 ']' 
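
The 'md5sum -c' line above is the heart of the dirty-shutdown test: a digest recorded before the unclean shutdown is re-checked once the device comes back, so any lost or corrupted write fails the run. A minimal sketch of that record-then-verify pattern, with hypothetical paths and plain dd standing in for the spdk_dd writes the real test performs:

#!/usr/bin/env bash
# Sketch of the record-then-verify checksum pattern (paths hypothetical).
set -e
testfile=/tmp/testfile2
# Write known data and record its digest before the simulated shutdown.
dd if=/dev/urandom of="$testfile" bs=1M count=256 status=none
md5sum "$testfile" > "$testfile.md5"
# ... unclean shutdown and restart of the device would happen here ...
# -c re-hashes the file and compares it with the stored digest; any
# mismatch makes md5sum exit non-zero, which fails the test.
md5sum -c "$testfile.md5"
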
00:36:33.522 14:55:42 -- common/autotest_common.sh@940 -- # kill -0 81552 00:36:33.522 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (81552) - No such process 00:36:33.522 14:55:42 -- common/autotest_common.sh@963 -- # echo 'Process with pid 81552 is not found' 00:36:33.522 Process with pid 81552 is not found 00:36:33.522 14:55:42 -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:36:33.780 14:55:42 -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:36:33.780 14:55:42 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:36:33.780 Remove shared memory files 00:36:33.780 14:55:42 -- ftl/common.sh@205 -- # rm -f rm -f 00:36:33.780 14:55:42 -- ftl/common.sh@206 -- # rm -f rm -f 00:36:33.780 14:55:42 -- ftl/common.sh@207 -- # rm -f rm -f 00:36:33.780 14:55:42 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:36:33.780 14:55:42 -- ftl/common.sh@209 -- # rm -f rm -f 00:36:33.780 00:36:33.780 real 3m17.886s 00:36:33.780 user 3m42.890s 00:36:33.780 sys 0m37.489s 00:36:33.780 14:55:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:36:33.780 14:55:42 -- common/autotest_common.sh@10 -- # set +x 00:36:33.780 ************************************ 00:36:33.780 END TEST ftl_dirty_shutdown 00:36:33.780 ************************************ 00:36:34.037 14:55:42 -- ftl/ftl.sh@79 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:36:34.037 14:55:42 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:36:34.037 14:55:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:36:34.037 14:55:42 -- common/autotest_common.sh@10 -- # set +x 00:36:34.037 ************************************ 00:36:34.037 START TEST ftl_upgrade_shutdown 00:36:34.037 ************************************ 00:36:34.037 14:55:42 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:36:34.037 * Looking for test storage... 00:36:34.037 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:36:34.037 14:55:42 -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:36:34.037 14:55:42 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:36:34.037 14:55:42 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:36:34.037 14:55:42 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:36:34.037 14:55:42 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
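
The 'No such process' message above is expected: the app already exited during the dirty-shutdown sequence, and killprocess in autotest_common.sh probes the pid with signal 0 before trying to kill it. A simplified sketch of that liveness-check idiom (the real helper does more, e.g. checking the process name):

# Simplified killprocess: probe with signal 0, then kill and reap.
killprocess() {
    local pid=$1
    # kill -0 delivers no signal; it only tests whether the pid exists
    # and is signalable. Failure means the process is already gone.
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "Process with pid $pid is not found"
        return 0
    fi
    kill "$pid"
    # wait reaps the exit status (works when the pid is our child).
    wait "$pid" 2>/dev/null || true
}
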
00:36:34.037 14:55:42 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:36:34.038 14:55:42 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:34.038 14:55:42 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:36:34.038 14:55:42 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:36:34.038 14:55:42 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:34.038 14:55:42 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:34.038 14:55:42 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:36:34.038 14:55:42 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:36:34.038 14:55:42 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:36:34.038 14:55:42 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:36:34.038 14:55:42 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:36:34.038 14:55:42 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:36:34.038 14:55:42 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:34.038 14:55:42 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:34.038 14:55:42 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:36:34.038 14:55:42 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:36:34.038 14:55:42 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:36:34.038 14:55:42 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:36:34.038 14:55:42 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:36:34.038 14:55:42 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:36:34.038 14:55:42 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:36:34.038 14:55:42 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:36:34.038 14:55:42 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:34.038 14:55:42 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:34.038 14:55:42 -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:36:34.038 14:55:42 -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:36:34.038 14:55:42 -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:36:34.038 14:55:42 -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:36:34.038 14:55:42 -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:36:34.038 14:55:42 -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:36:34.038 14:55:42 -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:36:34.038 14:55:42 -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:36:34.038 14:55:42 -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:36:34.038 14:55:42 -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:36:34.038 14:55:42 -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:36:34.038 14:55:42 -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:36:34.038 14:55:42 -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:36:34.038 14:55:42 -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:36:34.038 14:55:42 -- ftl/common.sh@81 -- # local base_bdev= 00:36:34.038 14:55:42 -- ftl/common.sh@82 -- # local cache_bdev= 00:36:34.038 14:55:42 -- ftl/common.sh@84 -- # [[ -f 
/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:36:34.038 14:55:42 -- ftl/common.sh@89 -- # spdk_tgt_pid=83661 00:36:34.038 14:55:42 -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:36:34.038 14:55:42 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:36:34.038 14:55:42 -- ftl/common.sh@91 -- # waitforlisten 83661 00:36:34.038 14:55:42 -- common/autotest_common.sh@817 -- # '[' -z 83661 ']' 00:36:34.038 14:55:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:34.038 14:55:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:36:34.038 14:55:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:34.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:34.038 14:55:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:36:34.038 14:55:42 -- common/autotest_common.sh@10 -- # set +x 00:36:34.362 [2024-04-17 14:55:42.781744] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:36:34.362 [2024-04-17 14:55:42.781905] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83661 ] 00:36:34.620 [2024-04-17 14:55:42.967238] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:34.620 [2024-04-17 14:55:43.215189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:35.995 14:55:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:36:35.995 14:55:44 -- common/autotest_common.sh@850 -- # return 0 00:36:35.995 14:55:44 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:36:35.995 14:55:44 -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:36:35.995 14:55:44 -- ftl/common.sh@99 -- # local params 00:36:35.995 14:55:44 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:36:35.995 14:55:44 -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:36:35.995 14:55:44 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:36:35.995 14:55:44 -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:36:35.995 14:55:44 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:36:35.995 14:55:44 -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:36:35.995 14:55:44 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:36:35.995 14:55:44 -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:36:35.995 14:55:44 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:36:35.995 14:55:44 -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:36:35.995 14:55:44 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:36:35.995 14:55:44 -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:36:35.995 14:55:44 -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:36:35.995 14:55:44 -- ftl/common.sh@54 -- # local name=base 00:36:35.995 14:55:44 -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:36:35.995 14:55:44 -- ftl/common.sh@56 -- # local size=20480 00:36:35.995 14:55:44 -- ftl/common.sh@59 -- # local base_bdev 00:36:35.995 14:55:44 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:36:35.995 14:55:44 -- ftl/common.sh@60 -- # base_bdev=basen1 00:36:35.995 14:55:44 -- ftl/common.sh@62 -- # local 
base_size 00:36:35.995 14:55:44 -- ftl/common.sh@63 -- # get_bdev_size basen1 00:36:35.995 14:55:44 -- common/autotest_common.sh@1364 -- # local bdev_name=basen1 00:36:35.995 14:55:44 -- common/autotest_common.sh@1365 -- # local bdev_info 00:36:35.995 14:55:44 -- common/autotest_common.sh@1366 -- # local bs 00:36:35.995 14:55:44 -- common/autotest_common.sh@1367 -- # local nb 00:36:35.995 14:55:44 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:36:36.252 14:55:44 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:36:36.252 { 00:36:36.252 "name": "basen1", 00:36:36.252 "aliases": [ 00:36:36.252 "44531ded-e239-41c6-acac-20bcb3f39b4e" 00:36:36.252 ], 00:36:36.252 "product_name": "NVMe disk", 00:36:36.252 "block_size": 4096, 00:36:36.252 "num_blocks": 1310720, 00:36:36.252 "uuid": "44531ded-e239-41c6-acac-20bcb3f39b4e", 00:36:36.252 "assigned_rate_limits": { 00:36:36.252 "rw_ios_per_sec": 0, 00:36:36.252 "rw_mbytes_per_sec": 0, 00:36:36.252 "r_mbytes_per_sec": 0, 00:36:36.252 "w_mbytes_per_sec": 0 00:36:36.252 }, 00:36:36.253 "claimed": true, 00:36:36.253 "claim_type": "read_many_write_one", 00:36:36.253 "zoned": false, 00:36:36.253 "supported_io_types": { 00:36:36.253 "read": true, 00:36:36.253 "write": true, 00:36:36.253 "unmap": true, 00:36:36.253 "write_zeroes": true, 00:36:36.253 "flush": true, 00:36:36.253 "reset": true, 00:36:36.253 "compare": true, 00:36:36.253 "compare_and_write": false, 00:36:36.253 "abort": true, 00:36:36.253 "nvme_admin": true, 00:36:36.253 "nvme_io": true 00:36:36.253 }, 00:36:36.253 "driver_specific": { 00:36:36.253 "nvme": [ 00:36:36.253 { 00:36:36.253 "pci_address": "0000:00:11.0", 00:36:36.253 "trid": { 00:36:36.253 "trtype": "PCIe", 00:36:36.253 "traddr": "0000:00:11.0" 00:36:36.253 }, 00:36:36.253 "ctrlr_data": { 00:36:36.253 "cntlid": 0, 00:36:36.253 "vendor_id": "0x1b36", 00:36:36.253 "model_number": "QEMU NVMe Ctrl", 00:36:36.253 "serial_number": "12341", 00:36:36.253 "firmware_revision": "8.0.0", 00:36:36.253 "subnqn": "nqn.2019-08.org.qemu:12341", 00:36:36.253 "oacs": { 00:36:36.253 "security": 0, 00:36:36.253 "format": 1, 00:36:36.253 "firmware": 0, 00:36:36.253 "ns_manage": 1 00:36:36.253 }, 00:36:36.253 "multi_ctrlr": false, 00:36:36.253 "ana_reporting": false 00:36:36.253 }, 00:36:36.253 "vs": { 00:36:36.253 "nvme_version": "1.4" 00:36:36.253 }, 00:36:36.253 "ns_data": { 00:36:36.253 "id": 1, 00:36:36.253 "can_share": false 00:36:36.253 } 00:36:36.253 } 00:36:36.253 ], 00:36:36.253 "mp_policy": "active_passive" 00:36:36.253 } 00:36:36.253 } 00:36:36.253 ]' 00:36:36.253 14:55:44 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:36:36.253 14:55:44 -- common/autotest_common.sh@1369 -- # bs=4096 00:36:36.253 14:55:44 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:36:36.253 14:55:44 -- common/autotest_common.sh@1370 -- # nb=1310720 00:36:36.253 14:55:44 -- common/autotest_common.sh@1373 -- # bdev_size=5120 00:36:36.253 14:55:44 -- common/autotest_common.sh@1374 -- # echo 5120 00:36:36.253 14:55:44 -- ftl/common.sh@63 -- # base_size=5120 00:36:36.253 14:55:44 -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:36:36.253 14:55:44 -- ftl/common.sh@67 -- # clear_lvols 00:36:36.253 14:55:44 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:36:36.253 14:55:44 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:36:36.821 14:55:45 -- ftl/common.sh@28 -- # stores=d4b5f692-51df-439b-a95f-dc4caab03153 00:36:36.821 14:55:45 -- 
ftl/common.sh@29 -- # for lvs in $stores 00:36:36.821 14:55:45 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d4b5f692-51df-439b-a95f-dc4caab03153 00:36:37.080 14:55:45 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:36:37.338 14:55:45 -- ftl/common.sh@68 -- # lvs=ad45aea4-f263-4b99-a515-7787d4dc0f0b 00:36:37.338 14:55:45 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u ad45aea4-f263-4b99-a515-7787d4dc0f0b 00:36:37.596 14:55:46 -- ftl/common.sh@107 -- # base_bdev=2b05d72c-c67c-4c7d-be04-8b66446e0e06 00:36:37.596 14:55:46 -- ftl/common.sh@108 -- # [[ -z 2b05d72c-c67c-4c7d-be04-8b66446e0e06 ]] 00:36:37.596 14:55:46 -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 2b05d72c-c67c-4c7d-be04-8b66446e0e06 5120 00:36:37.596 14:55:46 -- ftl/common.sh@35 -- # local name=cache 00:36:37.596 14:55:46 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:36:37.596 14:55:46 -- ftl/common.sh@37 -- # local base_bdev=2b05d72c-c67c-4c7d-be04-8b66446e0e06 00:36:37.596 14:55:46 -- ftl/common.sh@38 -- # local cache_size=5120 00:36:37.596 14:55:46 -- ftl/common.sh@41 -- # get_bdev_size 2b05d72c-c67c-4c7d-be04-8b66446e0e06 00:36:37.596 14:55:46 -- common/autotest_common.sh@1364 -- # local bdev_name=2b05d72c-c67c-4c7d-be04-8b66446e0e06 00:36:37.596 14:55:46 -- common/autotest_common.sh@1365 -- # local bdev_info 00:36:37.596 14:55:46 -- common/autotest_common.sh@1366 -- # local bs 00:36:37.596 14:55:46 -- common/autotest_common.sh@1367 -- # local nb 00:36:37.596 14:55:46 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2b05d72c-c67c-4c7d-be04-8b66446e0e06 00:36:37.854 14:55:46 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:36:37.854 { 00:36:37.854 "name": "2b05d72c-c67c-4c7d-be04-8b66446e0e06", 00:36:37.854 "aliases": [ 00:36:37.854 "lvs/basen1p0" 00:36:37.854 ], 00:36:37.854 "product_name": "Logical Volume", 00:36:37.854 "block_size": 4096, 00:36:37.854 "num_blocks": 5242880, 00:36:37.854 "uuid": "2b05d72c-c67c-4c7d-be04-8b66446e0e06", 00:36:37.854 "assigned_rate_limits": { 00:36:37.854 "rw_ios_per_sec": 0, 00:36:37.854 "rw_mbytes_per_sec": 0, 00:36:37.854 "r_mbytes_per_sec": 0, 00:36:37.854 "w_mbytes_per_sec": 0 00:36:37.854 }, 00:36:37.854 "claimed": false, 00:36:37.854 "zoned": false, 00:36:37.854 "supported_io_types": { 00:36:37.854 "read": true, 00:36:37.854 "write": true, 00:36:37.854 "unmap": true, 00:36:37.854 "write_zeroes": true, 00:36:37.854 "flush": false, 00:36:37.854 "reset": true, 00:36:37.854 "compare": false, 00:36:37.854 "compare_and_write": false, 00:36:37.854 "abort": false, 00:36:37.854 "nvme_admin": false, 00:36:37.854 "nvme_io": false 00:36:37.854 }, 00:36:37.854 "driver_specific": { 00:36:37.854 "lvol": { 00:36:37.854 "lvol_store_uuid": "ad45aea4-f263-4b99-a515-7787d4dc0f0b", 00:36:37.854 "base_bdev": "basen1", 00:36:37.854 "thin_provision": true, 00:36:37.854 "snapshot": false, 00:36:37.854 "clone": false, 00:36:37.854 "esnap_clone": false 00:36:37.854 } 00:36:37.854 } 00:36:37.854 } 00:36:37.854 ]' 00:36:37.854 14:55:46 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:36:37.854 14:55:46 -- common/autotest_common.sh@1369 -- # bs=4096 00:36:37.854 14:55:46 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:36:37.854 14:55:46 -- common/autotest_common.sh@1370 -- # nb=5242880 00:36:37.854 14:55:46 -- 
common/autotest_common.sh@1373 -- # bdev_size=20480 00:36:37.854 14:55:46 -- common/autotest_common.sh@1374 -- # echo 20480 00:36:37.854 14:55:46 -- ftl/common.sh@41 -- # local base_size=1024 00:36:37.854 14:55:46 -- ftl/common.sh@44 -- # local nvc_bdev 00:36:37.854 14:55:46 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:36:38.112 14:55:46 -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:36:38.112 14:55:46 -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:36:38.112 14:55:46 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:36:38.370 14:55:46 -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:36:38.370 14:55:46 -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:36:38.370 14:55:46 -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 2b05d72c-c67c-4c7d-be04-8b66446e0e06 -c cachen1p0 --l2p_dram_limit 2 00:36:38.639 [2024-04-17 14:55:47.114154] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:38.639 [2024-04-17 14:55:47.114211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:36:38.639 [2024-04-17 14:55:47.114235] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:36:38.639 [2024-04-17 14:55:47.114247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.639 [2024-04-17 14:55:47.114310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:38.639 [2024-04-17 14:55:47.114324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:36:38.639 [2024-04-17 14:55:47.114342] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:36:38.639 [2024-04-17 14:55:47.114353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.639 [2024-04-17 14:55:47.114420] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:36:38.639 [2024-04-17 14:55:47.115819] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:36:38.639 [2024-04-17 14:55:47.115858] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:38.639 [2024-04-17 14:55:47.115871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:36:38.639 [2024-04-17 14:55:47.115890] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.473 ms 00:36:38.639 [2024-04-17 14:55:47.115904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.639 [2024-04-17 14:55:47.116037] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 2c9d9d3d-ae6d-4354-8d46-21872afd20a6 00:36:38.639 [2024-04-17 14:55:47.117551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:38.639 [2024-04-17 14:55:47.117594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:36:38.639 [2024-04-17 14:55:47.117609] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:36:38.639 [2024-04-17 14:55:47.117624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.639 [2024-04-17 14:55:47.128089] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:38.639 [2024-04-17 14:55:47.128203] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:36:38.639 [2024-04-17 14:55:47.128249] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl] duration: 10.405 ms 00:36:38.639 [2024-04-17 14:55:47.128291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.639 [2024-04-17 14:55:47.128436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:38.639 [2024-04-17 14:55:47.128516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:36:38.639 [2024-04-17 14:55:47.128554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.059 ms 00:36:38.639 [2024-04-17 14:55:47.128595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.639 [2024-04-17 14:55:47.128786] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:38.639 [2024-04-17 14:55:47.128850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:36:38.639 [2024-04-17 14:55:47.128885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:36:38.639 [2024-04-17 14:55:47.128930] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.639 [2024-04-17 14:55:47.129004] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:36:38.639 [2024-04-17 14:55:47.136015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:38.639 [2024-04-17 14:55:47.136048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:36:38.639 [2024-04-17 14:55:47.136064] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.026 ms 00:36:38.639 [2024-04-17 14:55:47.136076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.639 [2024-04-17 14:55:47.136115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:38.639 [2024-04-17 14:55:47.136127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:36:38.639 [2024-04-17 14:55:47.136141] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:36:38.639 [2024-04-17 14:55:47.136152] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.639 [2024-04-17 14:55:47.136196] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:36:38.639 [2024-04-17 14:55:47.136311] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:36:38.639 [2024-04-17 14:55:47.136329] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:36:38.639 [2024-04-17 14:55:47.136343] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:36:38.639 [2024-04-17 14:55:47.136360] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:36:38.639 [2024-04-17 14:55:47.136376] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:36:38.639 [2024-04-17 14:55:47.136393] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:36:38.639 [2024-04-17 14:55:47.136405] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:36:38.639 [2024-04-17 14:55:47.136418] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:36:38.639 [2024-04-17 14:55:47.136429] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:36:38.639 [2024-04-17 14:55:47.136444] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:38.639 [2024-04-17 
14:55:47.136455] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:36:38.639 [2024-04-17 14:55:47.136469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.248 ms 00:36:38.639 [2024-04-17 14:55:47.136480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.639 [2024-04-17 14:55:47.136570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:38.639 [2024-04-17 14:55:47.136582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:36:38.639 [2024-04-17 14:55:47.136598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:36:38.639 [2024-04-17 14:55:47.136609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.639 [2024-04-17 14:55:47.136684] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:36:38.639 [2024-04-17 14:55:47.136697] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:36:38.639 [2024-04-17 14:55:47.136711] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:36:38.639 [2024-04-17 14:55:47.136722] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:38.639 [2024-04-17 14:55:47.136739] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:36:38.639 [2024-04-17 14:55:47.136750] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:36:38.639 [2024-04-17 14:55:47.136764] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:36:38.639 [2024-04-17 14:55:47.136774] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:36:38.639 [2024-04-17 14:55:47.136787] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:36:38.639 [2024-04-17 14:55:47.136797] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:38.639 [2024-04-17 14:55:47.136810] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:36:38.639 [2024-04-17 14:55:47.136821] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:36:38.639 [2024-04-17 14:55:47.136834] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:38.639 [2024-04-17 14:55:47.136845] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:36:38.639 [2024-04-17 14:55:47.136860] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:36:38.639 [2024-04-17 14:55:47.136871] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:38.639 [2024-04-17 14:55:47.136883] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:36:38.640 [2024-04-17 14:55:47.136893] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:36:38.640 [2024-04-17 14:55:47.136906] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:38.640 [2024-04-17 14:55:47.136916] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:36:38.640 [2024-04-17 14:55:47.136931] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:36:38.640 [2024-04-17 14:55:47.136942] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:36:38.640 [2024-04-17 14:55:47.136955] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:36:38.640 [2024-04-17 14:55:47.136965] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:36:38.640 [2024-04-17 14:55:47.136978] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:36:38.640 [2024-04-17 
14:55:47.136988] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:36:38.640 [2024-04-17 14:55:47.137000] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:36:38.640 [2024-04-17 14:55:47.137011] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:36:38.640 [2024-04-17 14:55:47.137023] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:36:38.640 [2024-04-17 14:55:47.137034] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:36:38.640 [2024-04-17 14:55:47.137052] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:36:38.640 [2024-04-17 14:55:47.137062] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:36:38.640 [2024-04-17 14:55:47.137074] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:36:38.640 [2024-04-17 14:55:47.137084] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:36:38.640 [2024-04-17 14:55:47.137097] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:36:38.640 [2024-04-17 14:55:47.137108] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:36:38.640 [2024-04-17 14:55:47.137123] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:38.640 [2024-04-17 14:55:47.137133] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:36:38.640 [2024-04-17 14:55:47.137145] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:36:38.640 [2024-04-17 14:55:47.137156] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:38.640 [2024-04-17 14:55:47.137170] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:36:38.640 [2024-04-17 14:55:47.137181] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:36:38.640 [2024-04-17 14:55:47.137195] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:36:38.640 [2024-04-17 14:55:47.137208] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:38.640 [2024-04-17 14:55:47.137222] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:36:38.640 [2024-04-17 14:55:47.137233] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:36:38.640 [2024-04-17 14:55:47.137246] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:36:38.640 [2024-04-17 14:55:47.137256] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:36:38.640 [2024-04-17 14:55:47.137269] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:36:38.640 [2024-04-17 14:55:47.137279] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:36:38.640 [2024-04-17 14:55:47.137294] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:36:38.640 [2024-04-17 14:55:47.137308] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:38.640 [2024-04-17 14:55:47.137326] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:36:38.640 [2024-04-17 14:55:47.137338] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:36:38.640 [2024-04-17 14:55:47.137353] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 
blk_offs:0xec0 blk_sz:0x20 00:36:38.640 [2024-04-17 14:55:47.137364] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:36:38.640 [2024-04-17 14:55:47.137378] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:36:38.640 [2024-04-17 14:55:47.137390] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:36:38.640 [2024-04-17 14:55:47.137405] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:36:38.640 [2024-04-17 14:55:47.137416] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:36:38.640 [2024-04-17 14:55:47.137430] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:36:38.640 [2024-04-17 14:55:47.137442] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:36:38.640 [2024-04-17 14:55:47.137456] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:36:38.640 [2024-04-17 14:55:47.137468] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:36:38.640 [2024-04-17 14:55:47.137482] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:36:38.640 [2024-04-17 14:55:47.137505] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:36:38.640 [2024-04-17 14:55:47.137522] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:38.640 [2024-04-17 14:55:47.137536] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:36:38.640 [2024-04-17 14:55:47.137553] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:36:38.640 [2024-04-17 14:55:47.137564] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:36:38.640 [2024-04-17 14:55:47.137579] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:36:38.640 [2024-04-17 14:55:47.137591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:38.640 [2024-04-17 14:55:47.137605] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:36:38.640 [2024-04-17 14:55:47.137616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.949 ms 00:36:38.640 [2024-04-17 14:55:47.137630] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.640 [2024-04-17 14:55:47.162972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:38.640 [2024-04-17 14:55:47.163014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:36:38.640 [2024-04-17 14:55:47.163029] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl] duration: 25.291 ms 00:36:38.640 [2024-04-17 14:55:47.163043] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.640 [2024-04-17 14:55:47.163088] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:38.640 [2024-04-17 14:55:47.163105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:36:38.640 [2024-04-17 14:55:47.163118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:36:38.640 [2024-04-17 14:55:47.163130] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.640 [2024-04-17 14:55:47.221330] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:38.640 [2024-04-17 14:55:47.221430] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:36:38.640 [2024-04-17 14:55:47.221456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 58.133 ms 00:36:38.640 [2024-04-17 14:55:47.221477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.640 [2024-04-17 14:55:47.221565] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:38.640 [2024-04-17 14:55:47.221590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:36:38.640 [2024-04-17 14:55:47.221607] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:36:38.640 [2024-04-17 14:55:47.221632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.640 [2024-04-17 14:55:47.222257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:38.640 [2024-04-17 14:55:47.222296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:36:38.640 [2024-04-17 14:55:47.222315] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.533 ms 00:36:38.640 [2024-04-17 14:55:47.222333] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.640 [2024-04-17 14:55:47.222429] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:38.640 [2024-04-17 14:55:47.222454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:36:38.640 [2024-04-17 14:55:47.222481] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:36:38.640 [2024-04-17 14:55:47.222501] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.899 [2024-04-17 14:55:47.245702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:38.899 [2024-04-17 14:55:47.245778] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:36:38.899 [2024-04-17 14:55:47.245801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.147 ms 00:36:38.899 [2024-04-17 14:55:47.245824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.899 [2024-04-17 14:55:47.260227] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:36:38.899 [2024-04-17 14:55:47.261525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:38.899 [2024-04-17 14:55:47.261570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:36:38.899 [2024-04-17 14:55:47.261597] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.524 ms 00:36:38.899 [2024-04-17 14:55:47.261614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.899 [2024-04-17 14:55:47.296070] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:38.899 [2024-04-17 14:55:47.296129] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:36:38.899 [2024-04-17 14:55:47.296150] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 34.393 ms 00:36:38.899 [2024-04-17 14:55:47.296161] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:38.899 [2024-04-17 14:55:47.296202] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] First startup needs to scrub nv cache data region, this may take some time. 00:36:38.899 [2024-04-17 14:55:47.296217] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 4GiB 00:36:41.606 [2024-04-17 14:55:49.690158] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:41.606 [2024-04-17 14:55:49.690226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:36:41.606 [2024-04-17 14:55:49.690251] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2393.940 ms 00:36:41.606 [2024-04-17 14:55:49.690280] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:41.606 [2024-04-17 14:55:49.690428] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:41.606 [2024-04-17 14:55:49.690449] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:36:41.606 [2024-04-17 14:55:49.690465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.088 ms 00:36:41.606 [2024-04-17 14:55:49.690476] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:41.606 [2024-04-17 14:55:49.732658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:41.606 [2024-04-17 14:55:49.732719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:36:41.606 [2024-04-17 14:55:49.732740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 42.105 ms 00:36:41.606 [2024-04-17 14:55:49.732779] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:41.606 [2024-04-17 14:55:49.779013] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:41.606 [2024-04-17 14:55:49.779080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:36:41.606 [2024-04-17 14:55:49.779101] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 46.170 ms 00:36:41.606 [2024-04-17 14:55:49.779114] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:41.606 [2024-04-17 14:55:49.779677] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:41.606 [2024-04-17 14:55:49.779701] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:36:41.606 [2024-04-17 14:55:49.779720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.498 ms 00:36:41.606 [2024-04-17 14:55:49.779738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:41.606 [2024-04-17 14:55:49.892159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:41.606 [2024-04-17 14:55:49.892245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:36:41.606 [2024-04-17 14:55:49.892268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 112.346 ms 00:36:41.606 [2024-04-17 14:55:49.892280] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:41.606 [2024-04-17 14:55:49.939183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:41.606 [2024-04-17 14:55:49.939251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 
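
The L2P numbers in this startup are consistent with the bdev_ftl_create flags used above: 3774873 L2P entries at an address size of 4 bytes is roughly 14.4 MiB of mapping table (the layout's 14.50 MiB l2p region, allowing for rounding to blocks), while --l2p_dram_limit 2 caps the DRAM-resident cache at 2 MiB, of which ftl_l2p_cache reports 1 MiB resident. A quick check of that arithmetic:

# L2P sizing check, using the entry count and address size from the log.
awk 'BEGIN {
    entries = 3774873; addr_size = 4
    printf "L2P table: %.2f MiB (DRAM limit from --l2p_dram_limit: 2 MiB)\n",
           entries * addr_size / (1024 * 1024)
}'
# prints: L2P table: 14.40 MiB (DRAM limit from --l2p_dram_limit: 2 MiB)
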
00:36:41.606 [2024-04-17 14:55:49.939273] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 46.830 ms 00:36:41.606 [2024-04-17 14:55:49.939286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:41.606 [2024-04-17 14:55:49.941825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:41.606 [2024-04-17 14:55:49.941859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:36:41.606 [2024-04-17 14:55:49.941875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.471 ms 00:36:41.606 [2024-04-17 14:55:49.941897] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:41.606 [2024-04-17 14:55:49.989192] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:41.606 [2024-04-17 14:55:49.989253] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:36:41.606 [2024-04-17 14:55:49.989278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 47.212 ms 00:36:41.606 [2024-04-17 14:55:49.989290] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:41.606 [2024-04-17 14:55:49.989357] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:41.606 [2024-04-17 14:55:49.989371] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:36:41.606 [2024-04-17 14:55:49.989388] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:36:41.606 [2024-04-17 14:55:49.989402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:41.606 [2024-04-17 14:55:49.989547] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:41.606 [2024-04-17 14:55:49.989562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:36:41.606 [2024-04-17 14:55:49.989578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:36:41.606 [2024-04-17 14:55:49.989589] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:41.606 [2024-04-17 14:55:49.990983] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2876.174 ms, result 0 00:36:41.606 { 00:36:41.606 "name": "ftl", 00:36:41.606 "uuid": "2c9d9d3d-ae6d-4354-8d46-21872afd20a6" 00:36:41.606 } 00:36:41.606 14:55:50 -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:36:41.864 [2024-04-17 14:55:50.286073] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:41.864 14:55:50 -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:36:42.122 14:55:50 -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:36:42.379 [2024-04-17 14:55:50.838726] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:36:42.379 14:55:50 -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:36:42.637 [2024-04-17 14:55:51.159020] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:42.637 14:55:51 -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:36:43.202 14:55:51 -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:36:43.202 14:55:51 -- ftl/upgrade_shutdown.sh@29 -- # seek=0 
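
With startup finished, the target exports the new ftl bdev over NVMe/TCP. Condensed from the RPC calls above, the target-side sequence is four RPCs plus a config snapshot (addresses and NQN as in the log; the snapshot output path is hypothetical):

# Target-side NVMe/TCP export of the ftl bdev, as performed above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport --trtype TCP
$rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1  # any host, max 1 ns
$rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl      # expose the FTL bdev
$rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 \
    -t TCP -f ipv4 -s 4420 -a 127.0.0.1
$rpc save_config > /tmp/target_config.json                     # hypothetical path
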
00:36:43.202 14:55:51 -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:36:43.202 14:55:51 -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:36:43.202 14:55:51 -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:36:43.202 14:55:51 -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:36:43.202 14:55:51 -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:36:43.202 14:55:51 -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:36:43.202 14:55:51 -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:36:43.202 14:55:51 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:36:43.202 Fill FTL, iteration 1 00:36:43.202 14:55:51 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:36:43.202 14:55:51 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:36:43.202 14:55:51 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:36:43.202 14:55:51 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:36:43.202 14:55:51 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:36:43.202 14:55:51 -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:36:43.202 14:55:51 -- ftl/common.sh@163 -- # spdk_ini_pid=83791 00:36:43.202 14:55:51 -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:36:43.202 14:55:51 -- ftl/common.sh@164 -- # export spdk_ini_pid 00:36:43.202 14:55:51 -- ftl/common.sh@165 -- # waitforlisten 83791 /var/tmp/spdk.tgt.sock 00:36:43.202 14:55:51 -- common/autotest_common.sh@817 -- # '[' -z 83791 ']' 00:36:43.202 14:55:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:36:43.202 14:55:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:36:43.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:36:43.202 14:55:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:36:43.202 14:55:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:36:43.202 14:55:51 -- common/autotest_common.sh@10 -- # set +x 00:36:43.202 [2024-04-17 14:55:51.699280] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
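
The fill geometry above is sized so that each pass covers exactly the 1 GiB test extent: bs * count = 1048576 * 1024 = 1073741824 bytes (the declared size), qd=2 keeps two I/Os in flight, and seek advances by count blocks between the two iterations. A one-liner sanity check:

# dd geometry used by the test: two 1 GiB passes at queue depth 2.
bs=1048576 count=1024 iterations=2
echo "bytes per iteration: $((bs * count))"             # 1073741824, matches $size
echo "total filled:       $((bs * count * iterations))" # 2 GiB across both passes
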
00:36:43.202 [2024-04-17 14:55:51.699416] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83791 ] 00:36:43.459 [2024-04-17 14:55:51.871464] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:43.743 [2024-04-17 14:55:52.171685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:44.699 14:55:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:36:44.699 14:55:53 -- common/autotest_common.sh@850 -- # return 0 00:36:44.699 14:55:53 -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:36:45.264 ftln1 00:36:45.264 14:55:53 -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:36:45.264 14:55:53 -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:36:45.522 14:55:53 -- ftl/common.sh@173 -- # echo ']}' 00:36:45.522 14:55:53 -- ftl/common.sh@176 -- # killprocess 83791 00:36:45.522 14:55:53 -- common/autotest_common.sh@936 -- # '[' -z 83791 ']' 00:36:45.522 14:55:53 -- common/autotest_common.sh@940 -- # kill -0 83791 00:36:45.522 14:55:53 -- common/autotest_common.sh@941 -- # uname 00:36:45.522 14:55:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:36:45.522 14:55:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83791 00:36:45.522 14:55:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:36:45.522 14:55:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:36:45.522 killing process with pid 83791 00:36:45.522 14:55:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83791' 00:36:45.522 14:55:53 -- common/autotest_common.sh@955 -- # kill 83791 00:36:45.522 14:55:53 -- common/autotest_common.sh@960 -- # wait 83791 00:36:48.819 14:55:56 -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:36:48.819 14:55:56 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:36:48.819 [2024-04-17 14:55:56.910736] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
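[Annotation] tcp_initiator_setup, traced above from ftl/common.sh@151-165, starts a second SPDK app pinned to core 1 with a private RPC socket, attaches the exported namespace (it shows up as ftln1), and snapshots the bdev subsystem so the spdk_dd runs below can attach via --json instead of live RPC. A condensed sketch of what those traced lines do; the redirect into ini.json is implied by the @153 existence check rather than shown directly:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
  spdk_ini_pid=$!
  waitforlisten "$spdk_ini_pid" /var/tmp/spdk.tgt.sock
  # attach the NVMe/TCP namespace exported by the target; bdev name ftl -> namespace bdev ftln1
  $rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
  {
      echo '{"subsystems": ['
      $rpc save_subsystem_config -n bdev
      echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
  killprocess "$spdk_ini_pid"   # spdk_dd consumes ini.json; the helper app is no longer needed

This is why each subsequent spdk_dd invocation in the log carries --json=.../config/ini.json and its own spdk_pid.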
00:36:48.819 [2024-04-17 14:55:56.911287] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83855 ] 00:36:48.819 [2024-04-17 14:55:57.093539] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:48.819 [2024-04-17 14:55:57.370212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:55.304  Copying: 233/1024 [MB] (233 MBps) Copying: 475/1024 [MB] (242 MBps) Copying: 710/1024 [MB] (235 MBps) Copying: 939/1024 [MB] (229 MBps) Copying: 1024/1024 [MB] (average 234 MBps) 00:36:55.304 00:36:55.304 Calculate MD5 checksum, iteration 1 00:36:55.304 14:56:03 -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:36:55.304 14:56:03 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:36:55.304 14:56:03 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:36:55.304 14:56:03 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:36:55.304 14:56:03 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:36:55.304 14:56:03 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:36:55.304 14:56:03 -- ftl/common.sh@154 -- # return 0 00:36:55.304 14:56:03 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:36:55.304 [2024-04-17 14:56:03.781993] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:36:55.304 [2024-04-17 14:56:03.782231] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83925 ] 00:36:55.562 [2024-04-17 14:56:03.986560] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:55.829 [2024-04-17 14:56:04.252324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:00.295  Copying: 552/1024 [MB] (552 MBps) Copying: 1024/1024 [MB] (average 521 MBps) 00:37:00.295 00:37:00.295 14:56:08 -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:37:00.295 14:56:08 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:37:02.195 14:56:10 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:37:02.195 Fill FTL, iteration 2 00:37:02.195 14:56:10 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=6f083a302d44d26ccfa470ac95126161 00:37:02.195 14:56:10 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:37:02.195 14:56:10 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:37:02.195 14:56:10 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:37:02.195 14:56:10 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:37:02.195 14:56:10 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:37:02.195 14:56:10 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:37:02.195 14:56:10 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:37:02.195 14:56:10 -- ftl/common.sh@154 -- # return 0 00:37:02.195 14:56:10 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:37:02.195 [2024-04-17 14:56:10.519206] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
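[Annotation] Iteration 1 is now on record: 1 GiB filled at an average 234 MBps, read back at an average 521 MBps, and its checksum (6f083a302d44d26ccfa470ac95126161) stored into sums[]. The whole fill/verify cycle is the loop traced above at upgrade_shutdown.sh@38-48; a sketch of it under the traced parameters (bs=1 MiB, count=1024, qd=2, iterations=2), where tcp_dd stands for the spdk_dd wrapper shown in the trace:

  bs=1048576; count=1024; iterations=2; qd=2
  seek=0; skip=0; sums=()
  for ((i = 0; i < iterations; i++)); do
      echo "Fill FTL, iteration $((i + 1))"
      # write 1 GiB of random data at the current 1 MiB-block offset
      tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
      seek=$((seek + count))
      echo "Calculate MD5 checksum, iteration $((i + 1))"
      # read the same region back through FTL and record its checksum
      tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=$bs --count=$count --qd=$qd --skip=$skip
      skip=$((skip + count))
      sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 '-d ')
  done

Because seek and skip advance by count each pass, iteration 2 below covers the second, non-overlapping GiB (--seek=1024 / --skip=1024).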
00:37:02.195 [2024-04-17 14:56:10.520055] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83997 ] 00:37:02.195 [2024-04-17 14:56:10.690186] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:02.452 [2024-04-17 14:56:11.029149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:09.100  Copying: 225/1024 [MB] (225 MBps) Copying: 455/1024 [MB] (230 MBps) Copying: 676/1024 [MB] (221 MBps) Copying: 900/1024 [MB] (224 MBps) Copying: 1024/1024 [MB] (average 225 MBps) 00:37:09.100 00:37:09.100 Calculate MD5 checksum, iteration 2 00:37:09.100 14:56:17 -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:37:09.100 14:56:17 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:37:09.100 14:56:17 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:37:09.100 14:56:17 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:37:09.100 14:56:17 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:37:09.100 14:56:17 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:37:09.100 14:56:17 -- ftl/common.sh@154 -- # return 0 00:37:09.100 14:56:17 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:37:09.100 [2024-04-17 14:56:17.575468] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:37:09.100 [2024-04-17 14:56:17.575843] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84068 ] 00:37:09.357 [2024-04-17 14:56:17.747161] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:09.615 [2024-04-17 14:56:18.073535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:14.644  Copying: 579/1024 [MB] (579 MBps) Copying: 1024/1024 [MB] (average 563 MBps) 00:37:14.644 00:37:14.644 14:56:22 -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:37:14.644 14:56:22 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:37:16.547 14:56:24 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:37:16.547 14:56:24 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=fc7aa6ecffde39ca20385a8b5a2377b8 00:37:16.547 14:56:24 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:37:16.547 14:56:24 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:37:16.547 14:56:24 -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:37:16.547 [2024-04-17 14:56:25.141107] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.548 [2024-04-17 14:56:25.141175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:37:16.548 [2024-04-17 14:56:25.141193] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:37:16.548 [2024-04-17 14:56:25.141205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.548 [2024-04-17 14:56:25.141235] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.548 [2024-04-17 14:56:25.141247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:37:16.548 [2024-04-17 14:56:25.141263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:37:16.548 [2024-04-17 14:56:25.141278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.548 [2024-04-17 14:56:25.141319] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:16.548 [2024-04-17 14:56:25.141335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:37:16.548 [2024-04-17 14:56:25.141347] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:37:16.548 [2024-04-17 14:56:25.141358] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:16.548 [2024-04-17 14:56:25.141423] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.329 ms, result 0 00:37:16.548 true 00:37:16.807 14:56:25 -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:37:16.807 { 00:37:16.807 "name": "ftl", 00:37:16.807 "properties": [ 00:37:16.807 { 00:37:16.807 "name": "superblock_version", 00:37:16.807 "value": 5, 00:37:16.807 "read-only": true 00:37:16.807 }, 00:37:16.807 { 00:37:16.807 "name": "base_device", 00:37:16.807 "bands": [ 00:37:16.807 { 00:37:16.807 "id": 0, 00:37:16.807 "state": "FREE", 00:37:16.807 "validity": 0.0 00:37:16.807 }, 00:37:16.807 { 00:37:16.807 "id": 1, 00:37:16.807 "state": "FREE", 00:37:16.807 "validity": 0.0 00:37:16.807 }, 00:37:16.807 { 00:37:16.807 "id": 2, 00:37:16.807 "state": "FREE", 00:37:16.807 "validity": 0.0 00:37:16.807 }, 00:37:16.807 { 00:37:16.807 "id": 3, 
00:37:16.807 "state": "FREE", 00:37:16.807 "validity": 0.0 00:37:16.807 }, 00:37:16.807 { 00:37:16.807 "id": 4, 00:37:16.807 "state": "FREE", 00:37:16.807 "validity": 0.0 00:37:16.807 }, 00:37:16.807 { 00:37:16.807 "id": 5, 00:37:16.807 "state": "FREE", 00:37:16.807 "validity": 0.0 00:37:16.807 }, 00:37:16.807 { 00:37:16.807 "id": 6, 00:37:16.807 "state": "FREE", 00:37:16.807 "validity": 0.0 00:37:16.807 }, 00:37:16.807 { 00:37:16.807 "id": 7, 00:37:16.807 "state": "FREE", 00:37:16.807 "validity": 0.0 00:37:16.807 }, 00:37:16.807 { 00:37:16.807 "id": 8, 00:37:16.807 "state": "FREE", 00:37:16.807 "validity": 0.0 00:37:16.807 }, 00:37:16.807 { 00:37:16.807 "id": 9, 00:37:16.807 "state": "FREE", 00:37:16.807 "validity": 0.0 00:37:16.807 }, 00:37:16.807 { 00:37:16.807 "id": 10, 00:37:16.807 "state": "FREE", 00:37:16.807 "validity": 0.0 00:37:16.807 }, 00:37:16.807 { 00:37:16.807 "id": 11, 00:37:16.807 "state": "FREE", 00:37:16.807 "validity": 0.0 00:37:16.807 }, 00:37:16.807 { 00:37:16.807 "id": 12, 00:37:16.807 "state": "FREE", 00:37:16.807 "validity": 0.0 00:37:16.807 }, 00:37:16.807 { 00:37:16.807 "id": 13, 00:37:16.807 "state": "FREE", 00:37:16.807 "validity": 0.0 00:37:16.807 }, 00:37:16.807 { 00:37:16.807 "id": 14, 00:37:16.807 "state": "FREE", 00:37:16.807 "validity": 0.0 00:37:16.807 }, 00:37:16.807 { 00:37:16.807 "id": 15, 00:37:16.807 "state": "FREE", 00:37:16.807 "validity": 0.0 00:37:16.807 }, 00:37:16.807 { 00:37:16.807 "id": 16, 00:37:16.807 "state": "FREE", 00:37:16.807 "validity": 0.0 00:37:16.807 }, 00:37:16.807 { 00:37:16.807 "id": 17, 00:37:16.807 "state": "FREE", 00:37:16.807 "validity": 0.0 00:37:16.807 } 00:37:16.807 ], 00:37:16.807 "read-only": true 00:37:16.807 }, 00:37:16.807 { 00:37:16.807 "name": "cache_device", 00:37:16.807 "type": "bdev", 00:37:16.807 "chunks": [ 00:37:16.807 { 00:37:16.807 "id": 0, 00:37:16.807 "state": "CLOSED", 00:37:16.807 "utilization": 1.0 00:37:16.807 }, 00:37:16.807 { 00:37:16.807 "id": 1, 00:37:16.807 "state": "CLOSED", 00:37:16.807 "utilization": 1.0 00:37:16.807 }, 00:37:16.807 { 00:37:16.807 "id": 2, 00:37:16.807 "state": "OPEN", 00:37:16.807 "utilization": 0.001953125 00:37:16.807 }, 00:37:16.807 { 00:37:16.807 "id": 3, 00:37:16.807 "state": "OPEN", 00:37:16.807 "utilization": 0.0 00:37:16.807 } 00:37:16.807 ], 00:37:16.807 "read-only": true 00:37:16.807 }, 00:37:16.807 { 00:37:16.807 "name": "verbose_mode", 00:37:16.807 "value": true, 00:37:16.807 "unit": "", 00:37:16.807 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:37:16.807 }, 00:37:16.807 { 00:37:16.807 "name": "prep_upgrade_on_shutdown", 00:37:16.807 "value": false, 00:37:16.807 "unit": "", 00:37:16.807 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:37:16.807 } 00:37:16.807 ] 00:37:16.807 } 00:37:16.807 14:56:25 -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:37:17.067 [2024-04-17 14:56:25.551110] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:17.067 [2024-04-17 14:56:25.551175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:37:17.067 [2024-04-17 14:56:25.551193] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:37:17.067 [2024-04-17 14:56:25.551206] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:17.067 [2024-04-17 14:56:25.551238] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:37:17.067 [2024-04-17 14:56:25.551251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:37:17.067 [2024-04-17 14:56:25.551263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:37:17.067 [2024-04-17 14:56:25.551275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:17.067 [2024-04-17 14:56:25.551298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:17.067 [2024-04-17 14:56:25.551310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:37:17.067 [2024-04-17 14:56:25.551322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:37:17.067 [2024-04-17 14:56:25.551333] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:17.067 [2024-04-17 14:56:25.551400] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.282 ms, result 0 00:37:17.067 true 00:37:17.067 14:56:25 -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:37:17.067 14:56:25 -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:37:17.067 14:56:25 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:37:17.328 14:56:25 -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:37:17.328 14:56:25 -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:37:17.328 14:56:25 -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:37:17.588 [2024-04-17 14:56:26.051659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:17.588 [2024-04-17 14:56:26.051723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:37:17.588 [2024-04-17 14:56:26.051741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:37:17.588 [2024-04-17 14:56:26.051753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:17.588 [2024-04-17 14:56:26.051783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:17.588 [2024-04-17 14:56:26.051795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:37:17.588 [2024-04-17 14:56:26.051808] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:37:17.588 [2024-04-17 14:56:26.051819] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:17.588 [2024-04-17 14:56:26.051843] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:17.588 [2024-04-17 14:56:26.051855] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:37:17.588 [2024-04-17 14:56:26.051866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:37:17.588 [2024-04-17 14:56:26.051878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:17.588 [2024-04-17 14:56:26.051941] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.274 ms, result 0 00:37:17.588 true 00:37:17.588 14:56:26 -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:37:17.848 { 00:37:17.848 "name": "ftl", 00:37:17.848 "properties": [ 00:37:17.848 { 00:37:17.848 "name": "superblock_version", 00:37:17.848 "value": 5, 00:37:17.848 "read-only": true 00:37:17.848 }, 00:37:17.848 { 00:37:17.848 
"name": "base_device", 00:37:17.848 "bands": [ 00:37:17.848 { 00:37:17.848 "id": 0, 00:37:17.848 "state": "FREE", 00:37:17.848 "validity": 0.0 00:37:17.848 }, 00:37:17.848 { 00:37:17.848 "id": 1, 00:37:17.848 "state": "FREE", 00:37:17.848 "validity": 0.0 00:37:17.848 }, 00:37:17.848 { 00:37:17.848 "id": 2, 00:37:17.848 "state": "FREE", 00:37:17.848 "validity": 0.0 00:37:17.848 }, 00:37:17.848 { 00:37:17.848 "id": 3, 00:37:17.848 "state": "FREE", 00:37:17.848 "validity": 0.0 00:37:17.848 }, 00:37:17.848 { 00:37:17.848 "id": 4, 00:37:17.848 "state": "FREE", 00:37:17.848 "validity": 0.0 00:37:17.848 }, 00:37:17.848 { 00:37:17.848 "id": 5, 00:37:17.848 "state": "FREE", 00:37:17.848 "validity": 0.0 00:37:17.848 }, 00:37:17.848 { 00:37:17.848 "id": 6, 00:37:17.848 "state": "FREE", 00:37:17.848 "validity": 0.0 00:37:17.848 }, 00:37:17.848 { 00:37:17.848 "id": 7, 00:37:17.848 "state": "FREE", 00:37:17.848 "validity": 0.0 00:37:17.848 }, 00:37:17.848 { 00:37:17.848 "id": 8, 00:37:17.848 "state": "FREE", 00:37:17.848 "validity": 0.0 00:37:17.848 }, 00:37:17.848 { 00:37:17.848 "id": 9, 00:37:17.848 "state": "FREE", 00:37:17.848 "validity": 0.0 00:37:17.848 }, 00:37:17.848 { 00:37:17.848 "id": 10, 00:37:17.848 "state": "FREE", 00:37:17.848 "validity": 0.0 00:37:17.848 }, 00:37:17.848 { 00:37:17.848 "id": 11, 00:37:17.848 "state": "FREE", 00:37:17.848 "validity": 0.0 00:37:17.848 }, 00:37:17.848 { 00:37:17.848 "id": 12, 00:37:17.848 "state": "FREE", 00:37:17.848 "validity": 0.0 00:37:17.848 }, 00:37:17.848 { 00:37:17.848 "id": 13, 00:37:17.848 "state": "FREE", 00:37:17.848 "validity": 0.0 00:37:17.848 }, 00:37:17.848 { 00:37:17.848 "id": 14, 00:37:17.848 "state": "FREE", 00:37:17.848 "validity": 0.0 00:37:17.848 }, 00:37:17.848 { 00:37:17.848 "id": 15, 00:37:17.848 "state": "FREE", 00:37:17.848 "validity": 0.0 00:37:17.848 }, 00:37:17.848 { 00:37:17.848 "id": 16, 00:37:17.848 "state": "FREE", 00:37:17.848 "validity": 0.0 00:37:17.848 }, 00:37:17.848 { 00:37:17.848 "id": 17, 00:37:17.848 "state": "FREE", 00:37:17.848 "validity": 0.0 00:37:17.848 } 00:37:17.848 ], 00:37:17.848 "read-only": true 00:37:17.848 }, 00:37:17.848 { 00:37:17.848 "name": "cache_device", 00:37:17.848 "type": "bdev", 00:37:17.848 "chunks": [ 00:37:17.848 { 00:37:17.848 "id": 0, 00:37:17.848 "state": "CLOSED", 00:37:17.848 "utilization": 1.0 00:37:17.848 }, 00:37:17.848 { 00:37:17.848 "id": 1, 00:37:17.848 "state": "CLOSED", 00:37:17.848 "utilization": 1.0 00:37:17.848 }, 00:37:17.848 { 00:37:17.848 "id": 2, 00:37:17.848 "state": "OPEN", 00:37:17.848 "utilization": 0.001953125 00:37:17.848 }, 00:37:17.848 { 00:37:17.848 "id": 3, 00:37:17.848 "state": "OPEN", 00:37:17.848 "utilization": 0.0 00:37:17.848 } 00:37:17.848 ], 00:37:17.848 "read-only": true 00:37:17.848 }, 00:37:17.848 { 00:37:17.848 "name": "verbose_mode", 00:37:17.848 "value": true, 00:37:17.848 "unit": "", 00:37:17.848 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:37:17.848 }, 00:37:17.848 { 00:37:17.848 "name": "prep_upgrade_on_shutdown", 00:37:17.848 "value": true, 00:37:17.848 "unit": "", 00:37:17.848 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:37:17.848 } 00:37:17.848 ] 00:37:17.848 } 00:37:17.848 14:56:26 -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:37:17.848 14:56:26 -- ftl/common.sh@130 -- # [[ -n 83661 ]] 00:37:17.848 14:56:26 -- ftl/common.sh@131 -- # killprocess 83661 00:37:17.848 14:56:26 -- common/autotest_common.sh@936 -- # '[' -z 83661 ']' 
00:37:17.848 14:56:26 -- common/autotest_common.sh@940 -- # kill -0 83661 00:37:17.848 14:56:26 -- common/autotest_common.sh@941 -- # uname 00:37:17.848 14:56:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:37:17.848 14:56:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83661 00:37:17.848 14:56:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:37:17.848 14:56:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:37:17.848 14:56:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83661' 00:37:17.848 killing process with pid 83661 00:37:17.848 14:56:26 -- common/autotest_common.sh@955 -- # kill 83661 00:37:17.848 14:56:26 -- common/autotest_common.sh@960 -- # wait 83661 00:37:19.247 [2024-04-17 14:56:27.613865] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:37:19.247 [2024-04-17 14:56:27.644984] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:19.247 [2024-04-17 14:56:27.645042] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:37:19.247 [2024-04-17 14:56:27.645075] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:37:19.247 [2024-04-17 14:56:27.645087] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:19.247 [2024-04-17 14:56:27.645112] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:37:19.247 [2024-04-17 14:56:27.649401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:19.247 [2024-04-17 14:56:27.649435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:37:19.247 [2024-04-17 14:56:27.649451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.271 ms 00:37:19.247 [2024-04-17 14:56:27.649463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:27.457 [2024-04-17 14:56:35.453637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:27.457 [2024-04-17 14:56:35.453726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:37:27.457 [2024-04-17 14:56:35.453755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7804.072 ms 00:37:27.457 [2024-04-17 14:56:35.453768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:27.457 [2024-04-17 14:56:35.508348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:27.457 [2024-04-17 14:56:35.508438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:37:27.457 [2024-04-17 14:56:35.508455] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 54.552 ms 00:37:27.457 [2024-04-17 14:56:35.508467] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:27.457 [2024-04-17 14:56:35.509605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:27.457 [2024-04-17 14:56:35.509634] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:37:27.457 [2024-04-17 14:56:35.509647] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.073 ms 00:37:27.457 [2024-04-17 14:56:35.509659] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:27.457 [2024-04-17 14:56:35.527385] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:27.457 [2024-04-17 14:56:35.527451] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:37:27.457 [2024-04-17 
14:56:35.527467] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.664 ms 00:37:27.457 [2024-04-17 14:56:35.527478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:27.457 [2024-04-17 14:56:35.539278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:27.457 [2024-04-17 14:56:35.539337] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:37:27.457 [2024-04-17 14:56:35.539353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.742 ms 00:37:27.457 [2024-04-17 14:56:35.539365] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:27.457 [2024-04-17 14:56:35.539487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:27.457 [2024-04-17 14:56:35.539523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:37:27.457 [2024-04-17 14:56:35.539536] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.080 ms 00:37:27.457 [2024-04-17 14:56:35.539548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:27.457 [2024-04-17 14:56:35.557173] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:27.457 [2024-04-17 14:56:35.557223] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:37:27.457 [2024-04-17 14:56:35.557238] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.598 ms 00:37:27.457 [2024-04-17 14:56:35.557249] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:27.457 [2024-04-17 14:56:35.573873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:27.457 [2024-04-17 14:56:35.573931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:37:27.457 [2024-04-17 14:56:35.573946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.579 ms 00:37:27.457 [2024-04-17 14:56:35.573958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:27.457 [2024-04-17 14:56:35.591431] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:27.457 [2024-04-17 14:56:35.591478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:37:27.457 [2024-04-17 14:56:35.591501] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.428 ms 00:37:27.457 [2024-04-17 14:56:35.591524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:27.457 [2024-04-17 14:56:35.608585] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:27.457 [2024-04-17 14:56:35.608629] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:37:27.457 [2024-04-17 14:56:35.608644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.990 ms 00:37:27.457 [2024-04-17 14:56:35.608656] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:27.457 [2024-04-17 14:56:35.608694] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:37:27.457 [2024-04-17 14:56:35.608715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:37:27.457 [2024-04-17 14:56:35.608729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:37:27.457 [2024-04-17 14:56:35.608741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:37:27.457 [2024-04-17 14:56:35.608754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] 
Band 4: 0 / 261120 wr_cnt: 0 state: free 00:37:27.457 [2024-04-17 14:56:35.608766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:37:27.457 [2024-04-17 14:56:35.608778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:37:27.457 [2024-04-17 14:56:35.608790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:37:27.457 [2024-04-17 14:56:35.608802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:37:27.457 [2024-04-17 14:56:35.608814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:37:27.457 [2024-04-17 14:56:35.608826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:37:27.457 [2024-04-17 14:56:35.608838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:37:27.457 [2024-04-17 14:56:35.608850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:37:27.457 [2024-04-17 14:56:35.608862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:37:27.457 [2024-04-17 14:56:35.608891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:37:27.457 [2024-04-17 14:56:35.608903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:37:27.457 [2024-04-17 14:56:35.608915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:37:27.457 [2024-04-17 14:56:35.608926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:37:27.457 [2024-04-17 14:56:35.608938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:37:27.457 [2024-04-17 14:56:35.608952] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:37:27.457 [2024-04-17 14:56:35.608963] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 2c9d9d3d-ae6d-4354-8d46-21872afd20a6 00:37:27.457 [2024-04-17 14:56:35.608975] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:37:27.457 [2024-04-17 14:56:35.608986] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:37:27.457 [2024-04-17 14:56:35.608997] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:37:27.457 [2024-04-17 14:56:35.609018] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:37:27.457 [2024-04-17 14:56:35.609029] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:37:27.457 [2024-04-17 14:56:35.609041] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:37:27.457 [2024-04-17 14:56:35.609052] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:37:27.457 [2024-04-17 14:56:35.609062] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:37:27.457 [2024-04-17 14:56:35.609072] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:37:27.457 [2024-04-17 14:56:35.609083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:27.457 [2024-04-17 14:56:35.609095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:37:27.457 [2024-04-17 14:56:35.609112] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.391 ms 00:37:27.457 [2024-04-17 14:56:35.609123] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:27.457 [2024-04-17 14:56:35.632146] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:27.457 [2024-04-17 14:56:35.632202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:37:27.457 [2024-04-17 14:56:35.632235] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 22.987 ms 00:37:27.457 [2024-04-17 14:56:35.632247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:27.457 [2024-04-17 14:56:35.632584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:27.458 [2024-04-17 14:56:35.632617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:37:27.458 [2024-04-17 14:56:35.632630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.292 ms 00:37:27.458 [2024-04-17 14:56:35.632642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:27.458 [2024-04-17 14:56:35.709892] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:27.458 [2024-04-17 14:56:35.709957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:37:27.458 [2024-04-17 14:56:35.709974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:27.458 [2024-04-17 14:56:35.709986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:27.458 [2024-04-17 14:56:35.710046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:27.458 [2024-04-17 14:56:35.710059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:37:27.458 [2024-04-17 14:56:35.710070] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:27.458 [2024-04-17 14:56:35.710081] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:27.458 [2024-04-17 14:56:35.710181] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:27.458 [2024-04-17 14:56:35.710195] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:37:27.458 [2024-04-17 14:56:35.710206] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:27.458 [2024-04-17 14:56:35.710217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:27.458 [2024-04-17 14:56:35.710237] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:27.458 [2024-04-17 14:56:35.710270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:37:27.458 [2024-04-17 14:56:35.710282] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:27.458 [2024-04-17 14:56:35.710294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:27.458 [2024-04-17 14:56:35.844185] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:27.458 [2024-04-17 14:56:35.844248] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:37:27.458 [2024-04-17 14:56:35.844293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:27.458 [2024-04-17 14:56:35.844304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:27.458 [2024-04-17 14:56:35.893088] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:27.458 [2024-04-17 14:56:35.893154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 
00:37:27.458 [2024-04-17 14:56:35.893168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:27.458 [2024-04-17 14:56:35.893179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:27.458 [2024-04-17 14:56:35.893264] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:27.458 [2024-04-17 14:56:35.893276] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:37:27.458 [2024-04-17 14:56:35.893287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:27.458 [2024-04-17 14:56:35.893297] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:27.458 [2024-04-17 14:56:35.893341] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:27.458 [2024-04-17 14:56:35.893353] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:37:27.458 [2024-04-17 14:56:35.893363] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:27.458 [2024-04-17 14:56:35.893377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:27.458 [2024-04-17 14:56:35.893484] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:27.458 [2024-04-17 14:56:35.893514] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:37:27.458 [2024-04-17 14:56:35.893525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:27.458 [2024-04-17 14:56:35.893536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:27.458 [2024-04-17 14:56:35.893592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:27.458 [2024-04-17 14:56:35.893605] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:37:27.458 [2024-04-17 14:56:35.893616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:27.458 [2024-04-17 14:56:35.893632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:27.458 [2024-04-17 14:56:35.893672] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:27.458 [2024-04-17 14:56:35.893683] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:37:27.458 [2024-04-17 14:56:35.893695] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:27.458 [2024-04-17 14:56:35.893706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:27.458 [2024-04-17 14:56:35.893752] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:27.458 [2024-04-17 14:56:35.893765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:37:27.458 [2024-04-17 14:56:35.893776] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:27.458 [2024-04-17 14:56:35.893790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:27.458 [2024-04-17 14:56:35.893916] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8248.860 ms, result 0 00:37:34.095 14:56:41 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:37:34.095 14:56:41 -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:37:34.095 14:56:41 -- ftl/common.sh@81 -- # local base_bdev= 00:37:34.095 14:56:41 -- ftl/common.sh@82 -- # local cache_bdev= 00:37:34.095 14:56:41 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:37:34.095 14:56:41 -- ftl/common.sh@89 -- # spdk_tgt_pid=84289 00:37:34.095 
14:56:41 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:37:34.095 14:56:41 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:37:34.095 14:56:41 -- ftl/common.sh@91 -- # waitforlisten 84289 00:37:34.095 14:56:41 -- common/autotest_common.sh@817 -- # '[' -z 84289 ']' 00:37:34.095 14:56:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:34.095 14:56:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:37:34.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:34.095 14:56:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:34.095 14:56:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:37:34.095 14:56:41 -- common/autotest_common.sh@10 -- # set +x 00:37:34.095 [2024-04-17 14:56:41.681479] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:37:34.095 [2024-04-17 14:56:41.681634] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84289 ] 00:37:34.095 [2024-04-17 14:56:41.848600] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:34.095 [2024-04-17 14:56:42.110953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:35.049 [2024-04-17 14:56:43.285762] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:37:35.049 [2024-04-17 14:56:43.285831] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:37:35.049 [2024-04-17 14:56:43.433227] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:35.049 [2024-04-17 14:56:43.433296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:37:35.049 [2024-04-17 14:56:43.433331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:37:35.049 [2024-04-17 14:56:43.433344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:35.049 [2024-04-17 14:56:43.433415] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:35.049 [2024-04-17 14:56:43.433430] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:37:35.049 [2024-04-17 14:56:43.433442] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:37:35.049 [2024-04-17 14:56:43.433454] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:35.049 [2024-04-17 14:56:43.433493] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:37:35.049 [2024-04-17 14:56:43.434912] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:37:35.049 [2024-04-17 14:56:43.434951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:35.049 [2024-04-17 14:56:43.434965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:37:35.049 [2024-04-17 14:56:43.434982] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.474 ms 00:37:35.049 [2024-04-17 14:56:43.434995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:35.049 [2024-04-17 14:56:43.436750] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: 
clean 0, shm_clean 0 00:37:35.049 [2024-04-17 14:56:43.461042] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:35.049 [2024-04-17 14:56:43.461122] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:37:35.049 [2024-04-17 14:56:43.461141] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 24.291 ms 00:37:35.049 [2024-04-17 14:56:43.461153] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:35.049 [2024-04-17 14:56:43.461273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:35.049 [2024-04-17 14:56:43.461288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:37:35.049 [2024-04-17 14:56:43.461300] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:37:35.049 [2024-04-17 14:56:43.461314] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:35.049 [2024-04-17 14:56:43.469096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:35.049 [2024-04-17 14:56:43.469137] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:37:35.049 [2024-04-17 14:56:43.469152] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.658 ms 00:37:35.049 [2024-04-17 14:56:43.469163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:35.049 [2024-04-17 14:56:43.469237] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:35.049 [2024-04-17 14:56:43.469253] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:37:35.049 [2024-04-17 14:56:43.469269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:37:35.049 [2024-04-17 14:56:43.469281] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:35.049 [2024-04-17 14:56:43.469342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:35.049 [2024-04-17 14:56:43.469355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:37:35.049 [2024-04-17 14:56:43.469367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:37:35.049 [2024-04-17 14:56:43.469378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:35.049 [2024-04-17 14:56:43.469410] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:37:35.049 [2024-04-17 14:56:43.476172] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:35.049 [2024-04-17 14:56:43.476243] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:37:35.049 [2024-04-17 14:56:43.476278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.767 ms 00:37:35.049 [2024-04-17 14:56:43.476290] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:35.049 [2024-04-17 14:56:43.476331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:35.049 [2024-04-17 14:56:43.476344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:37:35.049 [2024-04-17 14:56:43.476369] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:37:35.049 [2024-04-17 14:56:43.476380] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:35.049 [2024-04-17 14:56:43.476459] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:37:35.049 [2024-04-17 14:56:43.476488] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob 
load 0x138 bytes 00:37:35.049 [2024-04-17 14:56:43.476546] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:37:35.049 [2024-04-17 14:56:43.476571] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:37:35.049 [2024-04-17 14:56:43.476648] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:37:35.049 [2024-04-17 14:56:43.476667] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:37:35.049 [2024-04-17 14:56:43.476682] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:37:35.049 [2024-04-17 14:56:43.476697] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:37:35.049 [2024-04-17 14:56:43.476710] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:37:35.049 [2024-04-17 14:56:43.476723] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:37:35.049 [2024-04-17 14:56:43.476734] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:37:35.049 [2024-04-17 14:56:43.476746] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:37:35.049 [2024-04-17 14:56:43.476757] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:37:35.049 [2024-04-17 14:56:43.476770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:35.049 [2024-04-17 14:56:43.476782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:37:35.049 [2024-04-17 14:56:43.476794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.313 ms 00:37:35.049 [2024-04-17 14:56:43.476808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:35.049 [2024-04-17 14:56:43.476881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:35.049 [2024-04-17 14:56:43.476894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:37:35.049 [2024-04-17 14:56:43.476906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:37:35.049 [2024-04-17 14:56:43.476917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:35.049 [2024-04-17 14:56:43.477005] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:37:35.049 [2024-04-17 14:56:43.477037] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:37:35.049 [2024-04-17 14:56:43.477058] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:37:35.049 [2024-04-17 14:56:43.477071] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:35.049 [2024-04-17 14:56:43.477088] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:37:35.049 [2024-04-17 14:56:43.477099] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:37:35.049 [2024-04-17 14:56:43.477110] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:37:35.049 [2024-04-17 14:56:43.477121] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:37:35.049 [2024-04-17 14:56:43.477132] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:37:35.049 [2024-04-17 14:56:43.477143] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:35.049 
[2024-04-17 14:56:43.477154] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:37:35.049 [2024-04-17 14:56:43.477165] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:37:35.049 [2024-04-17 14:56:43.477175] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:35.049 [2024-04-17 14:56:43.477187] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:37:35.049 [2024-04-17 14:56:43.477198] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:37:35.049 [2024-04-17 14:56:43.477208] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:35.049 [2024-04-17 14:56:43.477219] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:37:35.049 [2024-04-17 14:56:43.477229] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:37:35.050 [2024-04-17 14:56:43.477240] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:35.050 [2024-04-17 14:56:43.477251] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:37:35.050 [2024-04-17 14:56:43.477261] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:37:35.050 [2024-04-17 14:56:43.477272] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:37:35.050 [2024-04-17 14:56:43.477283] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:37:35.050 [2024-04-17 14:56:43.477293] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:37:35.050 [2024-04-17 14:56:43.477303] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:37:35.050 [2024-04-17 14:56:43.477315] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:37:35.050 [2024-04-17 14:56:43.477325] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:37:35.050 [2024-04-17 14:56:43.477336] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:37:35.050 [2024-04-17 14:56:43.477346] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:37:35.050 [2024-04-17 14:56:43.477357] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:37:35.050 [2024-04-17 14:56:43.477367] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:37:35.050 [2024-04-17 14:56:43.477378] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:37:35.050 [2024-04-17 14:56:43.477389] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:37:35.050 [2024-04-17 14:56:43.477399] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:37:35.050 [2024-04-17 14:56:43.477410] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:37:35.050 [2024-04-17 14:56:43.477421] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:37:35.050 [2024-04-17 14:56:43.477431] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:35.050 [2024-04-17 14:56:43.477442] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:37:35.050 [2024-04-17 14:56:43.477453] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:37:35.050 [2024-04-17 14:56:43.477463] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:35.050 [2024-04-17 14:56:43.477473] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:37:35.050 [2024-04-17 14:56:43.477485] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 
00:37:35.050 [2024-04-17 14:56:43.477515] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:37:35.050 [2024-04-17 14:56:43.477527] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:35.050 [2024-04-17 14:56:43.477538] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:37:35.050 [2024-04-17 14:56:43.477550] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:37:35.050 [2024-04-17 14:56:43.477561] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:37:35.050 [2024-04-17 14:56:43.477572] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:37:35.050 [2024-04-17 14:56:43.477582] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:37:35.050 [2024-04-17 14:56:43.477605] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:37:35.050 [2024-04-17 14:56:43.477617] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:37:35.050 [2024-04-17 14:56:43.477632] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:35.050 [2024-04-17 14:56:43.477645] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:37:35.050 [2024-04-17 14:56:43.477657] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:37:35.050 [2024-04-17 14:56:43.477669] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:37:35.050 [2024-04-17 14:56:43.477681] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:37:35.050 [2024-04-17 14:56:43.477693] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:37:35.050 [2024-04-17 14:56:43.477706] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:37:35.050 [2024-04-17 14:56:43.477717] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:37:35.050 [2024-04-17 14:56:43.477729] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:37:35.050 [2024-04-17 14:56:43.477741] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:37:35.050 [2024-04-17 14:56:43.477753] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:37:35.050 [2024-04-17 14:56:43.477766] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:37:35.050 [2024-04-17 14:56:43.477778] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:37:35.050 [2024-04-17 14:56:43.477790] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:37:35.050 [2024-04-17 14:56:43.477802] upgrade/ftl_sb_v5.c: 
421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:37:35.050 [2024-04-17 14:56:43.477815] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:35.050 [2024-04-17 14:56:43.477827] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:37:35.050 [2024-04-17 14:56:43.477839] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:37:35.050 [2024-04-17 14:56:43.477851] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:37:35.050 [2024-04-17 14:56:43.477863] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:37:35.050 [2024-04-17 14:56:43.477877] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:35.050 [2024-04-17 14:56:43.477888] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:37:35.050 [2024-04-17 14:56:43.477904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.915 ms 00:37:35.050 [2024-04-17 14:56:43.477915] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:35.050 [2024-04-17 14:56:43.506127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:35.050 [2024-04-17 14:56:43.506189] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:37:35.050 [2024-04-17 14:56:43.506211] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 28.146 ms 00:37:35.050 [2024-04-17 14:56:43.506223] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:35.050 [2024-04-17 14:56:43.506287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:35.050 [2024-04-17 14:56:43.506300] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:37:35.050 [2024-04-17 14:56:43.506312] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:37:35.050 [2024-04-17 14:56:43.506323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:35.050 [2024-04-17 14:56:43.566993] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:35.050 [2024-04-17 14:56:43.567053] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:37:35.050 [2024-04-17 14:56:43.567071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 60.564 ms 00:37:35.050 [2024-04-17 14:56:43.567083] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:35.050 [2024-04-17 14:56:43.567157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:35.050 [2024-04-17 14:56:43.567170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:37:35.050 [2024-04-17 14:56:43.567183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:37:35.050 [2024-04-17 14:56:43.567195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:35.050 [2024-04-17 14:56:43.567726] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:35.050 [2024-04-17 14:56:43.567750] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:37:35.050 [2024-04-17 14:56:43.567762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.460 ms 00:37:35.050 
[2024-04-17 14:56:43.567773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:35.050 [2024-04-17 14:56:43.567822] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:35.050 [2024-04-17 14:56:43.567838] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:37:35.050 [2024-04-17 14:56:43.567849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:37:35.050 [2024-04-17 14:56:43.567859] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:35.050 [2024-04-17 14:56:43.594329] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:35.050 [2024-04-17 14:56:43.594401] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:37:35.050 [2024-04-17 14:56:43.594420] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 26.444 ms 00:37:35.050 [2024-04-17 14:56:43.594442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:35.050 [2024-04-17 14:56:43.618345] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:37:35.050 [2024-04-17 14:56:43.618446] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:37:35.050 [2024-04-17 14:56:43.618466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:35.050 [2024-04-17 14:56:43.618479] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:37:35.050 [2024-04-17 14:56:43.618520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.863 ms 00:37:35.050 [2024-04-17 14:56:43.618533] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:35.050 [2024-04-17 14:56:43.643776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:35.050 [2024-04-17 14:56:43.643842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:37:35.050 [2024-04-17 14:56:43.643859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 25.152 ms 00:37:35.050 [2024-04-17 14:56:43.643871] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:35.309 [2024-04-17 14:56:43.666330] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:35.309 [2024-04-17 14:56:43.666422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:37:35.309 [2024-04-17 14:56:43.666441] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 22.381 ms 00:37:35.310 [2024-04-17 14:56:43.666453] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:35.310 [2024-04-17 14:56:43.689765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:35.310 [2024-04-17 14:56:43.689830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:37:35.310 [2024-04-17 14:56:43.689848] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.205 ms 00:37:35.310 [2024-04-17 14:56:43.689877] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:35.310 [2024-04-17 14:56:43.690540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:35.310 [2024-04-17 14:56:43.690577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:37:35.310 [2024-04-17 14:56:43.690597] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.504 ms 00:37:35.310 [2024-04-17 14:56:43.690609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:37:35.310 [2024-04-17 14:56:43.802153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:35.310 [2024-04-17 14:56:43.802239] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:37:35.310 [2024-04-17 14:56:43.802264] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 111.512 ms 00:37:35.310 [2024-04-17 14:56:43.802276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:35.310 [2024-04-17 14:56:43.819225] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:37:35.310 [2024-04-17 14:56:43.820417] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:35.310 [2024-04-17 14:56:43.820450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:37:35.310 [2024-04-17 14:56:43.820466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.048 ms 00:37:35.310 [2024-04-17 14:56:43.820479] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:35.310 [2024-04-17 14:56:43.820612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:35.310 [2024-04-17 14:56:43.820628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:37:35.310 [2024-04-17 14:56:43.820645] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:37:35.310 [2024-04-17 14:56:43.820657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:35.310 [2024-04-17 14:56:43.820725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:35.310 [2024-04-17 14:56:43.820739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:37:35.310 [2024-04-17 14:56:43.820752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:37:35.310 [2024-04-17 14:56:43.820763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:35.310 [2024-04-17 14:56:43.823353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:35.310 [2024-04-17 14:56:43.823396] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:37:35.310 [2024-04-17 14:56:43.823410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.561 ms 00:37:35.310 [2024-04-17 14:56:43.823422] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:35.310 [2024-04-17 14:56:43.823462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:35.310 [2024-04-17 14:56:43.823474] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:37:35.310 [2024-04-17 14:56:43.823487] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:37:35.310 [2024-04-17 14:56:43.823508] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:35.310 [2024-04-17 14:56:43.823555] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:37:35.310 [2024-04-17 14:56:43.823570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:35.310 [2024-04-17 14:56:43.823581] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:37:35.310 [2024-04-17 14:56:43.823594] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:37:35.310 [2024-04-17 14:56:43.823605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:35.310 [2024-04-17 14:56:43.870388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:35.310 [2024-04-17 14:56:43.870463] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:37:35.310 [2024-04-17 14:56:43.870483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 46.745 ms 00:37:35.310 [2024-04-17 14:56:43.870508] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:35.310 [2024-04-17 14:56:43.870627] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:35.310 [2024-04-17 14:56:43.870641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:37:35.310 [2024-04-17 14:56:43.870655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:37:35.310 [2024-04-17 14:56:43.870677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:35.310 [2024-04-17 14:56:43.871995] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 438.271 ms, result 0 00:37:35.310 [2024-04-17 14:56:43.886868] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:35.310 [2024-04-17 14:56:43.902859] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:37:35.570 [2024-04-17 14:56:43.914368] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:35.829 14:56:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:37:35.829 14:56:44 -- common/autotest_common.sh@850 -- # return 0 00:37:35.829 14:56:44 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:37:35.829 14:56:44 -- ftl/common.sh@95 -- # return 0 00:37:35.829 14:56:44 -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:37:36.087 [2024-04-17 14:56:44.637124] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:36.087 [2024-04-17 14:56:44.637192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:37:36.087 [2024-04-17 14:56:44.637211] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:37:36.087 [2024-04-17 14:56:44.637223] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:36.087 [2024-04-17 14:56:44.637257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:36.087 [2024-04-17 14:56:44.637270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:37:36.087 [2024-04-17 14:56:44.637298] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:37:36.087 [2024-04-17 14:56:44.637310] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:36.087 [2024-04-17 14:56:44.637334] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:36.087 [2024-04-17 14:56:44.637346] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:37:36.087 [2024-04-17 14:56:44.637358] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:37:36.087 [2024-04-17 14:56:44.637369] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:36.087 [2024-04-17 14:56:44.637436] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.306 ms, result 0 00:37:36.087 true 00:37:36.087 14:56:44 -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:37:36.653 { 00:37:36.653 "name": "ftl", 00:37:36.653 "properties": [ 00:37:36.653 { 00:37:36.653 
"name": "superblock_version", 00:37:36.653 "value": 5, 00:37:36.653 "read-only": true 00:37:36.653 }, 00:37:36.653 { 00:37:36.653 "name": "base_device", 00:37:36.653 "bands": [ 00:37:36.653 { 00:37:36.653 "id": 0, 00:37:36.653 "state": "CLOSED", 00:37:36.653 "validity": 1.0 00:37:36.653 }, 00:37:36.653 { 00:37:36.653 "id": 1, 00:37:36.653 "state": "CLOSED", 00:37:36.653 "validity": 1.0 00:37:36.653 }, 00:37:36.653 { 00:37:36.653 "id": 2, 00:37:36.653 "state": "CLOSED", 00:37:36.653 "validity": 0.007843137254901933 00:37:36.653 }, 00:37:36.653 { 00:37:36.653 "id": 3, 00:37:36.653 "state": "FREE", 00:37:36.653 "validity": 0.0 00:37:36.653 }, 00:37:36.653 { 00:37:36.653 "id": 4, 00:37:36.653 "state": "FREE", 00:37:36.653 "validity": 0.0 00:37:36.653 }, 00:37:36.653 { 00:37:36.653 "id": 5, 00:37:36.653 "state": "FREE", 00:37:36.653 "validity": 0.0 00:37:36.653 }, 00:37:36.653 { 00:37:36.653 "id": 6, 00:37:36.653 "state": "FREE", 00:37:36.653 "validity": 0.0 00:37:36.653 }, 00:37:36.653 { 00:37:36.653 "id": 7, 00:37:36.653 "state": "FREE", 00:37:36.653 "validity": 0.0 00:37:36.653 }, 00:37:36.653 { 00:37:36.653 "id": 8, 00:37:36.653 "state": "FREE", 00:37:36.653 "validity": 0.0 00:37:36.653 }, 00:37:36.653 { 00:37:36.653 "id": 9, 00:37:36.653 "state": "FREE", 00:37:36.653 "validity": 0.0 00:37:36.653 }, 00:37:36.653 { 00:37:36.653 "id": 10, 00:37:36.653 "state": "FREE", 00:37:36.653 "validity": 0.0 00:37:36.653 }, 00:37:36.653 { 00:37:36.653 "id": 11, 00:37:36.653 "state": "FREE", 00:37:36.653 "validity": 0.0 00:37:36.653 }, 00:37:36.653 { 00:37:36.653 "id": 12, 00:37:36.653 "state": "FREE", 00:37:36.653 "validity": 0.0 00:37:36.653 }, 00:37:36.653 { 00:37:36.653 "id": 13, 00:37:36.653 "state": "FREE", 00:37:36.653 "validity": 0.0 00:37:36.653 }, 00:37:36.653 { 00:37:36.653 "id": 14, 00:37:36.653 "state": "FREE", 00:37:36.653 "validity": 0.0 00:37:36.653 }, 00:37:36.653 { 00:37:36.653 "id": 15, 00:37:36.653 "state": "FREE", 00:37:36.653 "validity": 0.0 00:37:36.653 }, 00:37:36.653 { 00:37:36.653 "id": 16, 00:37:36.653 "state": "FREE", 00:37:36.653 "validity": 0.0 00:37:36.653 }, 00:37:36.653 { 00:37:36.653 "id": 17, 00:37:36.653 "state": "FREE", 00:37:36.653 "validity": 0.0 00:37:36.653 } 00:37:36.653 ], 00:37:36.653 "read-only": true 00:37:36.653 }, 00:37:36.653 { 00:37:36.653 "name": "cache_device", 00:37:36.653 "type": "bdev", 00:37:36.653 "chunks": [ 00:37:36.653 { 00:37:36.653 "id": 0, 00:37:36.653 "state": "OPEN", 00:37:36.653 "utilization": 0.0 00:37:36.653 }, 00:37:36.653 { 00:37:36.653 "id": 1, 00:37:36.653 "state": "OPEN", 00:37:36.654 "utilization": 0.0 00:37:36.654 }, 00:37:36.654 { 00:37:36.654 "id": 2, 00:37:36.654 "state": "FREE", 00:37:36.654 "utilization": 0.0 00:37:36.654 }, 00:37:36.654 { 00:37:36.654 "id": 3, 00:37:36.654 "state": "FREE", 00:37:36.654 "utilization": 0.0 00:37:36.654 } 00:37:36.654 ], 00:37:36.654 "read-only": true 00:37:36.654 }, 00:37:36.654 { 00:37:36.654 "name": "verbose_mode", 00:37:36.654 "value": true, 00:37:36.654 "unit": "", 00:37:36.654 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:37:36.654 }, 00:37:36.654 { 00:37:36.654 "name": "prep_upgrade_on_shutdown", 00:37:36.654 "value": false, 00:37:36.654 "unit": "", 00:37:36.654 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:37:36.654 } 00:37:36.654 ] 00:37:36.654 } 00:37:36.654 14:56:44 -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:37:36.654 14:56:44 -- ftl/upgrade_shutdown.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:37:36.654 14:56:44 -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:37:36.911 14:56:45 -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:37:36.911 14:56:45 -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:37:36.911 14:56:45 -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:37:36.911 14:56:45 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:37:36.911 14:56:45 -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:37:36.911 Validate MD5 checksum, iteration 1 00:37:36.911 14:56:45 -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:37:36.911 14:56:45 -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:37:36.911 14:56:45 -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:37:36.911 14:56:45 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:37:36.911 14:56:45 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:37:36.911 14:56:45 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:37:36.911 14:56:45 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:37:36.911 14:56:45 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:37:36.911 14:56:45 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:37:36.911 14:56:45 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:37:36.911 14:56:45 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:37:36.911 14:56:45 -- ftl/common.sh@154 -- # return 0 00:37:36.911 14:56:45 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:37:37.170 [2024-04-17 14:56:45.609185] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
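[editor's aside] The two jq filters traced above are how the test decides the device is in the expected state before it exercises shutdown: one counts NV-cache chunks with non-zero utilization, the other counts bands reported as OPENED, and both come back 0 in this run. A condensed sketch of that check, built only from the commands visible in the trace (the failure branch is not shown there, so 'exit 1' below is a stand-in):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  used=$($rpc_py bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
  opened=$($rpc_py bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length')
  [[ $used -ne 0 ]] && exit 1     # used=0 here: no dirty cache chunks yet
  [[ $opened -ne 0 ]] && exit 1   # opened=0 here: no open bands

With both counts clean, the checksum validation that starts in the surrounding lines can proceed.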
00:37:37.170 [2024-04-17 14:56:45.609349] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84345 ] 00:37:37.428 [2024-04-17 14:56:45.793166] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:37.686 [2024-04-17 14:56:46.114786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:42.473  Copying: 528/1024 [MB] (528 MBps) Copying: 1024/1024 [MB] (average 533 MBps) 00:37:42.473 00:37:42.473 14:56:50 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:37:42.473 14:56:50 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:37:44.376 14:56:52 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:37:44.376 Validate MD5 checksum, iteration 2 00:37:44.376 14:56:52 -- ftl/upgrade_shutdown.sh@103 -- # sum=6f083a302d44d26ccfa470ac95126161 00:37:44.376 14:56:52 -- ftl/upgrade_shutdown.sh@105 -- # [[ 6f083a302d44d26ccfa470ac95126161 != \6\f\0\8\3\a\3\0\2\d\4\4\d\2\6\c\c\f\a\4\7\0\a\c\9\5\1\2\6\1\6\1 ]] 00:37:44.376 14:56:52 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:37:44.376 14:56:52 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:37:44.376 14:56:52 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:37:44.376 14:56:52 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:37:44.376 14:56:52 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:37:44.376 14:56:52 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:37:44.376 14:56:52 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:37:44.376 14:56:52 -- ftl/common.sh@154 -- # return 0 00:37:44.376 14:56:52 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:37:44.376 [2024-04-17 14:56:52.882048] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
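[editor's aside] Iteration 1 above reads the first 1024 MiB window from the ftln1 initiator into a scratch file and checks its MD5 sum; the backslash-escaped value inside the [[ ... ]] test is just the expected sum written as a bash pattern so the comparison stays literal. Roughly, each iteration does the following (the skip bookkeeping is inferred from the '# skip=1024' and later '# skip=2048' trace lines; 'expected' is an illustrative variable name):

  tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=$skip
  sum=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
  [[ $sum != "$expected" ]] && return 1   # iteration 1 matched 6f083a302d44d26ccfa470ac95126161
  skip=$((skip + 1024))                   # advance to the next 1024 MiB window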
00:37:44.376 [2024-04-17 14:56:52.882538] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84422 ] 00:37:44.641 [2024-04-17 14:56:53.069947] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:44.911 [2024-04-17 14:56:53.383527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:50.662  Copying: 636/1024 [MB] (636 MBps) Copying: 1024/1024 [MB] (average 588 MBps) 00:37:50.662 00:37:50.662 14:56:59 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:37:50.662 14:56:59 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:37:53.192 14:57:01 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:37:53.192 14:57:01 -- ftl/upgrade_shutdown.sh@103 -- # sum=fc7aa6ecffde39ca20385a8b5a2377b8 00:37:53.192 14:57:01 -- ftl/upgrade_shutdown.sh@105 -- # [[ fc7aa6ecffde39ca20385a8b5a2377b8 != \f\c\7\a\a\6\e\c\f\f\d\e\3\9\c\a\2\0\3\8\5\a\8\b\5\a\2\3\7\7\b\8 ]] 00:37:53.192 14:57:01 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:37:53.192 14:57:01 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:37:53.192 14:57:01 -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:37:53.192 14:57:01 -- ftl/common.sh@137 -- # [[ -n 84289 ]] 00:37:53.192 14:57:01 -- ftl/common.sh@138 -- # kill -9 84289 00:37:53.192 14:57:01 -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:37:53.192 14:57:01 -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:37:53.192 14:57:01 -- ftl/common.sh@81 -- # local base_bdev= 00:37:53.192 14:57:01 -- ftl/common.sh@82 -- # local cache_bdev= 00:37:53.192 14:57:01 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:37:53.192 14:57:01 -- ftl/common.sh@89 -- # spdk_tgt_pid=84507 00:37:53.192 14:57:01 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:37:53.192 14:57:01 -- ftl/common.sh@91 -- # waitforlisten 84507 00:37:53.192 14:57:01 -- common/autotest_common.sh@817 -- # '[' -z 84507 ']' 00:37:53.192 14:57:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:53.192 14:57:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:37:53.192 14:57:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:53.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:53.192 14:57:01 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:37:53.192 14:57:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:37:53.192 14:57:01 -- common/autotest_common.sh@10 -- # set +x 00:37:53.192 [2024-04-17 14:57:01.516774] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
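[editor's aside] With both windows verified, the test now shuts the target down dirty on purpose: common.sh kills pid 84289 with SIGKILL, so FTL gets no chance to flush or mark the superblock clean, and a fresh spdk_tgt (pid 84507) is started from the tgt.json saved earlier. Condensed from the trace, with every path and flag taken from the lines around this point:

  kill -9 $spdk_tgt_pid          # 84289 in this run; produces the 'Killed' line below
  unset spdk_tgt_pid
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
      --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
  spdk_tgt_pid=$!                # 84507 here
  waitforlisten $spdk_tgt_pid    # blocks until /var/tmp/spdk.sock is listening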
00:37:53.192 [2024-04-17 14:57:01.516946] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84507 ] 00:37:53.192 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 816: 84289 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:37:53.192 [2024-04-17 14:57:01.701145] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:53.450 [2024-04-17 14:57:02.043320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:54.825 [2024-04-17 14:57:03.198209] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:37:54.825 [2024-04-17 14:57:03.198586] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:37:54.825 [2024-04-17 14:57:03.344791] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:54.825 [2024-04-17 14:57:03.345060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:37:54.825 [2024-04-17 14:57:03.345186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:37:54.825 [2024-04-17 14:57:03.345238] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:54.825 [2024-04-17 14:57:03.345367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:54.825 [2024-04-17 14:57:03.345500] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:37:54.825 [2024-04-17 14:57:03.345604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:37:54.825 [2024-04-17 14:57:03.345647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:54.825 [2024-04-17 14:57:03.345735] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:37:54.825 [2024-04-17 14:57:03.347339] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:37:54.825 [2024-04-17 14:57:03.347565] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:54.825 [2024-04-17 14:57:03.347668] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:37:54.825 [2024-04-17 14:57:03.347726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.857 ms 00:37:54.825 [2024-04-17 14:57:03.347858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:54.825 [2024-04-17 14:57:03.348457] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:37:54.825 [2024-04-17 14:57:03.381684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:54.825 [2024-04-17 14:57:03.381927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:37:54.825 [2024-04-17 14:57:03.382037] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 33.225 ms 00:37:54.825 [2024-04-17 14:57:03.382080] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:54.825 [2024-04-17 14:57:03.400300] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:54.825 [2024-04-17 14:57:03.400485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:37:54.825 [2024-04-17 14:57:03.400596] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:37:54.825 [2024-04-17 14:57:03.400636] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:37:54.825 [2024-04-17 14:57:03.401279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:54.825 [2024-04-17 14:57:03.401407] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:37:54.825 [2024-04-17 14:57:03.401511] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.484 ms 00:37:54.825 [2024-04-17 14:57:03.401620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:54.825 [2024-04-17 14:57:03.401711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:54.825 [2024-04-17 14:57:03.401791] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:37:54.825 [2024-04-17 14:57:03.401868] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:37:54.825 [2024-04-17 14:57:03.401908] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:54.825 [2024-04-17 14:57:03.402024] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:54.825 [2024-04-17 14:57:03.402122] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:37:54.825 [2024-04-17 14:57:03.402222] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:37:54.825 [2024-04-17 14:57:03.402263] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:54.825 [2024-04-17 14:57:03.402385] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:37:54.825 [2024-04-17 14:57:03.409010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:54.825 [2024-04-17 14:57:03.409183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:37:54.825 [2024-04-17 14:57:03.409276] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.633 ms 00:37:54.825 [2024-04-17 14:57:03.409317] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:54.825 [2024-04-17 14:57:03.409393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:54.825 [2024-04-17 14:57:03.409440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:37:54.825 [2024-04-17 14:57:03.409477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:37:54.825 [2024-04-17 14:57:03.409551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:54.825 [2024-04-17 14:57:03.409633] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:37:54.825 [2024-04-17 14:57:03.409691] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:37:54.825 [2024-04-17 14:57:03.409798] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:37:54.825 [2024-04-17 14:57:03.409867] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:37:54.825 [2024-04-17 14:57:03.409991] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:37:54.826 [2024-04-17 14:57:03.410071] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:37:54.826 [2024-04-17 14:57:03.410129] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:37:54.826 [2024-04-17 14:57:03.410189] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base 
device capacity: 20480.00 MiB 00:37:54.826 [2024-04-17 14:57:03.410248] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:37:54.826 [2024-04-17 14:57:03.410374] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:37:54.826 [2024-04-17 14:57:03.410422] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:37:54.826 [2024-04-17 14:57:03.410457] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:37:54.826 [2024-04-17 14:57:03.410504] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:37:54.826 [2024-04-17 14:57:03.410544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:54.826 [2024-04-17 14:57:03.410579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:37:54.826 [2024-04-17 14:57:03.410615] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.914 ms 00:37:54.826 [2024-04-17 14:57:03.410747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:54.826 [2024-04-17 14:57:03.410869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:54.826 [2024-04-17 14:57:03.410935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:37:54.826 [2024-04-17 14:57:03.411017] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:37:54.826 [2024-04-17 14:57:03.411064] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:54.826 [2024-04-17 14:57:03.411210] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:37:54.826 [2024-04-17 14:57:03.411252] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:37:54.826 [2024-04-17 14:57:03.411289] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:37:54.826 [2024-04-17 14:57:03.411371] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:54.826 [2024-04-17 14:57:03.411413] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:37:54.826 [2024-04-17 14:57:03.411515] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:37:54.826 [2024-04-17 14:57:03.411555] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:37:54.826 [2024-04-17 14:57:03.411624] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:37:54.826 [2024-04-17 14:57:03.411663] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:37:54.826 [2024-04-17 14:57:03.411697] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:54.826 [2024-04-17 14:57:03.411816] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:37:54.826 [2024-04-17 14:57:03.411856] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:37:54.826 [2024-04-17 14:57:03.411889] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:54.826 [2024-04-17 14:57:03.411923] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:37:54.826 [2024-04-17 14:57:03.411956] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:37:54.826 [2024-04-17 14:57:03.412032] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:54.826 [2024-04-17 14:57:03.412072] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:37:54.826 [2024-04-17 14:57:03.412105] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:37:54.826 
[2024-04-17 14:57:03.412140] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:54.826 [2024-04-17 14:57:03.412174] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:37:54.826 [2024-04-17 14:57:03.412248] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:37:54.826 [2024-04-17 14:57:03.412282] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:37:54.826 [2024-04-17 14:57:03.412315] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:37:54.826 [2024-04-17 14:57:03.412348] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:37:54.826 [2024-04-17 14:57:03.412502] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:37:54.826 [2024-04-17 14:57:03.412548] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:37:54.826 [2024-04-17 14:57:03.412602] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:37:54.826 [2024-04-17 14:57:03.412846] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:37:54.826 [2024-04-17 14:57:03.412893] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:37:54.826 [2024-04-17 14:57:03.412928] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:37:54.826 [2024-04-17 14:57:03.412963] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:37:54.826 [2024-04-17 14:57:03.413046] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:37:54.826 [2024-04-17 14:57:03.413091] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:37:54.826 [2024-04-17 14:57:03.413126] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:37:54.826 [2024-04-17 14:57:03.413161] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:37:54.826 [2024-04-17 14:57:03.413195] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:37:54.826 [2024-04-17 14:57:03.413288] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:54.826 [2024-04-17 14:57:03.413371] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:37:54.826 [2024-04-17 14:57:03.413413] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:37:54.826 [2024-04-17 14:57:03.413556] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:54.826 [2024-04-17 14:57:03.413606] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:37:54.826 [2024-04-17 14:57:03.413649] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:37:54.826 [2024-04-17 14:57:03.413741] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:37:54.826 [2024-04-17 14:57:03.413904] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:54.826 [2024-04-17 14:57:03.413960] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:37:54.826 [2024-04-17 14:57:03.414209] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:37:54.826 [2024-04-17 14:57:03.414338] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:37:54.826 [2024-04-17 14:57:03.414416] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:37:54.826 [2024-04-17 14:57:03.414521] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:37:54.826 [2024-04-17 14:57:03.414604] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:37:54.826 
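[editor's aside] The dump_region lines just printed and the hex Region entries in the superblock dump that follows describe the same layout in two units: dump_region reports MiB, while the superblock reports offsets and sizes in 4 KiB FTL blocks. Assuming that 4 KiB block size (every entry here is consistent with it), the figures cross-check by hand:

  echo $(( 0x100000 * 4 / 1024 ))   # data_nvc: 0x100000 blocks -> 4096 MiB, as printed above
  echo $(( 0x20 * 4 ))              # the 0x20-block metadata regions -> 128 KiB, i.e. the 0.12 MiB entries
  echo $(( 0x1f60 ))                # data_nvc blk_offs -> 8032, matching the first recovered chunk offset below
  echo $(( 270176 - 8032 ))         # spacing of the two recovered open chunks -> 262144 blocks = 1 GiB per chunk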
[2024-04-17 14:57:03.414746] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:37:54.826 [2024-04-17 14:57:03.414825] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:54.826 [2024-04-17 14:57:03.414925] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:37:54.826 [2024-04-17 14:57:03.414984] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:37:54.826 [2024-04-17 14:57:03.415154] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:37:54.826 [2024-04-17 14:57:03.415210] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:37:54.826 [2024-04-17 14:57:03.415266] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:37:54.826 [2024-04-17 14:57:03.415322] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:37:54.826 [2024-04-17 14:57:03.415470] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:37:54.826 [2024-04-17 14:57:03.415547] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:37:54.826 [2024-04-17 14:57:03.415605] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:37:54.826 [2024-04-17 14:57:03.415661] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:37:54.826 [2024-04-17 14:57:03.415998] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:37:54.826 [2024-04-17 14:57:03.416061] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:37:54.826 [2024-04-17 14:57:03.416119] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:37:54.826 [2024-04-17 14:57:03.416240] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:37:54.826 [2024-04-17 14:57:03.416298] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:54.826 [2024-04-17 14:57:03.416354] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:37:54.826 [2024-04-17 14:57:03.416412] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:37:54.826 [2024-04-17 14:57:03.416469] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:37:54.826 [2024-04-17 14:57:03.416712] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 
blk_offs:0x480120 blk_sz:0x7fee0 00:37:54.826 [2024-04-17 14:57:03.416828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:54.826 [2024-04-17 14:57:03.416865] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:37:54.826 [2024-04-17 14:57:03.416902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.661 ms 00:37:54.826 [2024-04-17 14:57:03.416938] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.085 [2024-04-17 14:57:03.446523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.085 [2024-04-17 14:57:03.446757] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:37:55.085 [2024-04-17 14:57:03.446856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 29.464 ms 00:37:55.085 [2024-04-17 14:57:03.446900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.085 [2024-04-17 14:57:03.446991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.085 [2024-04-17 14:57:03.447073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:37:55.085 [2024-04-17 14:57:03.447127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:37:55.085 [2024-04-17 14:57:03.447162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.085 [2024-04-17 14:57:03.508484] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.085 [2024-04-17 14:57:03.508721] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:37:55.085 [2024-04-17 14:57:03.508859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 61.209 ms 00:37:55.085 [2024-04-17 14:57:03.508901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.085 [2024-04-17 14:57:03.508995] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.085 [2024-04-17 14:57:03.509087] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:37:55.085 [2024-04-17 14:57:03.509128] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:37:55.085 [2024-04-17 14:57:03.509163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.085 [2024-04-17 14:57:03.509393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.085 [2024-04-17 14:57:03.509506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:37:55.085 [2024-04-17 14:57:03.509588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.078 ms 00:37:55.085 [2024-04-17 14:57:03.509629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.085 [2024-04-17 14:57:03.509712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.085 [2024-04-17 14:57:03.509788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:37:55.085 [2024-04-17 14:57:03.509839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:37:55.085 [2024-04-17 14:57:03.509873] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.085 [2024-04-17 14:57:03.538204] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.085 [2024-04-17 14:57:03.538409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:37:55.085 [2024-04-17 14:57:03.538526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 28.279 ms 00:37:55.085 [2024-04-17 14:57:03.538611] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.085 [2024-04-17 14:57:03.538826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.085 [2024-04-17 14:57:03.538876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:37:55.085 [2024-04-17 14:57:03.538996] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:37:55.085 [2024-04-17 14:57:03.539040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.085 [2024-04-17 14:57:03.569756] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.085 [2024-04-17 14:57:03.569942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:37:55.085 [2024-04-17 14:57:03.570029] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 30.654 ms 00:37:55.085 [2024-04-17 14:57:03.570070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.085 [2024-04-17 14:57:03.587113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.085 [2024-04-17 14:57:03.587280] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:37:55.085 [2024-04-17 14:57:03.587365] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.445 ms 00:37:55.085 [2024-04-17 14:57:03.587413] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.344 [2024-04-17 14:57:03.693965] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.344 [2024-04-17 14:57:03.694235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:37:55.344 [2024-04-17 14:57:03.694349] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 106.424 ms 00:37:55.344 [2024-04-17 14:57:03.694408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.344 [2024-04-17 14:57:03.694579] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:37:55.344 [2024-04-17 14:57:03.694793] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:37:55.344 [2024-04-17 14:57:03.694881] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:37:55.344 [2024-04-17 14:57:03.694975] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:37:55.344 [2024-04-17 14:57:03.695098] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.344 [2024-04-17 14:57:03.695140] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:37:55.344 [2024-04-17 14:57:03.695177] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.576 ms 00:37:55.344 [2024-04-17 14:57:03.695212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.344 [2024-04-17 14:57:03.695336] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:37:55.344 [2024-04-17 14:57:03.695455] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.344 [2024-04-17 14:57:03.695516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:37:55.344 [2024-04-17 14:57:03.695698] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.118 ms 00:37:55.344 [2024-04-17 14:57:03.695744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.344 [2024-04-17 14:57:03.725117] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.344 [2024-04-17 14:57:03.725331] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:37:55.344 [2024-04-17 14:57:03.725438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 29.287 ms 00:37:55.344 [2024-04-17 14:57:03.725480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.344 [2024-04-17 14:57:03.742112] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.344 [2024-04-17 14:57:03.742312] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:37:55.344 [2024-04-17 14:57:03.742402] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:37:55.344 [2024-04-17 14:57:03.742443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.344 [2024-04-17 14:57:03.742595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.344 [2024-04-17 14:57:03.742642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover unmap map 00:37:55.344 [2024-04-17 14:57:03.742679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:37:55.344 [2024-04-17 14:57:03.742785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.344 [2024-04-17 14:57:03.743036] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 8032, seq id 14 00:37:55.603 [2024-04-17 14:57:04.189084] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 8032, seq id 14 00:37:55.603 [2024-04-17 14:57:04.189513] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 270176, seq id 15 00:37:56.170 [2024-04-17 14:57:04.668394] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 270176, seq id 15 00:37:56.170 [2024-04-17 14:57:04.668789] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:37:56.170 [2024-04-17 14:57:04.668923] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:37:56.170 [2024-04-17 14:57:04.668988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:56.170 [2024-04-17 14:57:04.669081] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:37:56.170 [2024-04-17 14:57:04.669128] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 926.132 ms 00:37:56.170 [2024-04-17 14:57:04.669165] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:56.170 [2024-04-17 14:57:04.669287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:56.170 [2024-04-17 14:57:04.669396] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:37:56.170 [2024-04-17 14:57:04.669482] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:37:56.170 [2024-04-17 14:57:04.669540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:56.170 [2024-04-17 14:57:04.686569] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:37:56.170 [2024-04-17 14:57:04.687038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:56.170 [2024-04-17 14:57:04.687166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:37:56.170 [2024-04-17 14:57:04.687265] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.433 ms 00:37:56.170 [2024-04-17 14:57:04.687307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:56.170 [2024-04-17 14:57:04.688107] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:56.170 [2024-04-17 14:57:04.688251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from SHM 00:37:56.170 [2024-04-17 14:57:04.688368] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.634 ms 00:37:56.170 [2024-04-17 14:57:04.688411] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:56.170 [2024-04-17 14:57:04.691075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:56.170 [2024-04-17 14:57:04.691217] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:37:56.170 [2024-04-17 14:57:04.691311] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.519 ms 00:37:56.170 [2024-04-17 14:57:04.691354] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:56.170 [2024-04-17 14:57:04.739653] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:56.170 [2024-04-17 14:57:04.739927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Complete unmap transaction 00:37:56.170 [2024-04-17 14:57:04.740095] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 48.227 ms 00:37:56.170 [2024-04-17 14:57:04.740138] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:56.170 [2024-04-17 14:57:04.740479] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:56.170 [2024-04-17 14:57:04.740626] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:37:56.170 [2024-04-17 14:57:04.740713] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:37:56.170 [2024-04-17 14:57:04.740804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:56.170 [2024-04-17 14:57:04.743412] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:56.170 [2024-04-17 14:57:04.743594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:37:56.170 [2024-04-17 14:57:04.743684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.537 ms 00:37:56.170 [2024-04-17 14:57:04.743727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:56.170 [2024-04-17 14:57:04.743832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:56.170 [2024-04-17 14:57:04.743871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:37:56.170 [2024-04-17 14:57:04.743908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:37:56.170 [2024-04-17 14:57:04.743944] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:56.170 [2024-04-17 14:57:04.744077] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:37:56.170 [2024-04-17 14:57:04.744124] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:56.170 [2024-04-17 14:57:04.744160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:37:56.170 [2024-04-17 14:57:04.744208] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:37:56.170 [2024-04-17 14:57:04.744302] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:56.170 [2024-04-17 14:57:04.744402] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:37:56.170 [2024-04-17 14:57:04.744464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:37:56.170 [2024-04-17 14:57:04.744555] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:37:56.170 [2024-04-17 14:57:04.744598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:56.170 [2024-04-17 14:57:04.745809] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1400.513 ms, result 0 00:37:56.170 [2024-04-17 14:57:04.760697] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:56.428 [2024-04-17 14:57:04.776660] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:37:56.428 [2024-04-17 14:57:04.788572] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:56.428 14:57:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:37:56.429 14:57:04 -- common/autotest_common.sh@850 -- # return 0 00:37:56.429 14:57:04 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:37:56.429 14:57:04 -- ftl/common.sh@95 -- # return 0 00:37:56.429 14:57:04 -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:37:56.429 14:57:04 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:37:56.429 14:57:04 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:37:56.429 14:57:04 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:37:56.429 14:57:04 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:37:56.429 Validate MD5 checksum, iteration 1 00:37:56.429 14:57:04 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:37:56.429 14:57:04 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:37:56.429 14:57:04 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:37:56.429 14:57:04 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:37:56.429 14:57:04 -- ftl/common.sh@154 -- # return 0 00:37:56.429 14:57:04 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:37:56.429 [2024-04-17 14:57:04.912040] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:37:56.429 [2024-04-17 14:57:04.912170] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84553 ] 00:37:56.692 [2024-04-17 14:57:05.086673] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:56.951 [2024-04-17 14:57:05.451337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:38:01.991  Copying: 602/1024 [MB] (602 MBps) Copying: 1024/1024 [MB] (average 589 MBps) 00:38:01.991 00:38:01.991 14:57:10 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:38:01.991 14:57:10 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:38:03.890 14:57:12 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:38:03.890 14:57:12 -- ftl/upgrade_shutdown.sh@103 -- # sum=6f083a302d44d26ccfa470ac95126161 00:38:03.890 14:57:12 -- ftl/upgrade_shutdown.sh@105 -- # [[ 6f083a302d44d26ccfa470ac95126161 != \6\f\0\8\3\a\3\0\2\d\4\4\d\2\6\c\c\f\a\4\7\0\a\c\9\5\1\2\6\1\6\1 ]] 00:38:03.890 14:57:12 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:38:03.890 14:57:12 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:38:03.890 14:57:12 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:38:03.890 Validate MD5 checksum, iteration 2 00:38:03.890 14:57:12 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:38:03.890 14:57:12 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:38:03.890 14:57:12 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:38:03.890 14:57:12 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:38:03.890 14:57:12 -- ftl/common.sh@154 -- # return 0 00:38:03.890 14:57:12 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:38:03.890 [2024-04-17 14:57:12.367559] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 
00:38:03.890 [2024-04-17 14:57:12.367700] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84630 ] 00:38:04.148 [2024-04-17 14:57:12.538035] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:04.406 [2024-04-17 14:57:12.813802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:38:10.311  Copying: 585/1024 [MB] (585 MBps) Copying: 1024/1024 [MB] (average 560 MBps) 00:38:10.311 00:38:10.311 14:57:18 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:38:10.311 14:57:18 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:38:12.850 14:57:20 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:38:12.850 14:57:20 -- ftl/upgrade_shutdown.sh@103 -- # sum=fc7aa6ecffde39ca20385a8b5a2377b8 00:38:12.850 14:57:20 -- ftl/upgrade_shutdown.sh@105 -- # [[ fc7aa6ecffde39ca20385a8b5a2377b8 != \f\c\7\a\a\6\e\c\f\f\d\e\3\9\c\a\2\0\3\8\5\a\8\b\5\a\2\3\7\7\b\8 ]] 00:38:12.850 14:57:20 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:38:12.850 14:57:20 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:38:12.850 14:57:20 -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:38:12.850 14:57:20 -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:38:12.850 14:57:20 -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:38:12.850 14:57:20 -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:38:12.850 14:57:21 -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:38:12.850 14:57:21 -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:38:12.850 14:57:21 -- ftl/common.sh@193 -- # tcp_target_cleanup 00:38:12.850 14:57:21 -- ftl/common.sh@144 -- # tcp_target_shutdown 00:38:12.850 14:57:21 -- ftl/common.sh@130 -- # [[ -n 84507 ]] 00:38:12.850 14:57:21 -- ftl/common.sh@131 -- # killprocess 84507 00:38:12.850 14:57:21 -- common/autotest_common.sh@936 -- # '[' -z 84507 ']' 00:38:12.850 14:57:21 -- common/autotest_common.sh@940 -- # kill -0 84507 00:38:12.850 14:57:21 -- common/autotest_common.sh@941 -- # uname 00:38:12.850 14:57:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:38:12.850 14:57:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84507 00:38:12.850 14:57:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:38:12.850 14:57:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:38:12.850 14:57:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84507' 00:38:12.850 killing process with pid 84507 00:38:12.850 14:57:21 -- common/autotest_common.sh@955 -- # kill 84507 00:38:12.850 14:57:21 -- common/autotest_common.sh@960 -- # wait 84507 00:38:14.228 [2024-04-17 14:57:22.422336] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:38:14.228 [2024-04-17 14:57:22.455249] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:14.228 [2024-04-17 14:57:22.455325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:38:14.228 [2024-04-17 14:57:22.455350] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:38:14.228 [2024-04-17 14:57:22.455372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:14.228 [2024-04-17 14:57:22.455420] mngt/ftl_mngt_ioch.c: 
136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:38:14.228 [2024-04-17 14:57:22.460244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:14.228 [2024-04-17 14:57:22.460305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:38:14.228 [2024-04-17 14:57:22.460333] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.783 ms 00:38:14.228 [2024-04-17 14:57:22.460345] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:14.228 [2024-04-17 14:57:22.460686] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:14.228 [2024-04-17 14:57:22.460713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:38:14.228 [2024-04-17 14:57:22.460732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.301 ms 00:38:14.228 [2024-04-17 14:57:22.460747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:14.228 [2024-04-17 14:57:22.462105] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:14.228 [2024-04-17 14:57:22.462156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:38:14.228 [2024-04-17 14:57:22.462175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.330 ms 00:38:14.228 [2024-04-17 14:57:22.462200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:14.228 [2024-04-17 14:57:22.463910] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:14.228 [2024-04-17 14:57:22.463957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:38:14.228 [2024-04-17 14:57:22.463976] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.650 ms 00:38:14.228 [2024-04-17 14:57:22.463992] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:14.228 [2024-04-17 14:57:22.480983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:14.228 [2024-04-17 14:57:22.481049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:38:14.228 [2024-04-17 14:57:22.481067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.922 ms 00:38:14.228 [2024-04-17 14:57:22.481106] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:14.228 [2024-04-17 14:57:22.491471] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:14.228 [2024-04-17 14:57:22.491545] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:38:14.228 [2024-04-17 14:57:22.491566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.299 ms 00:38:14.228 [2024-04-17 14:57:22.491579] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:14.228 [2024-04-17 14:57:22.491703] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:14.228 [2024-04-17 14:57:22.491718] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:38:14.228 [2024-04-17 14:57:22.491732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.063 ms 00:38:14.228 [2024-04-17 14:57:22.491758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:14.228 [2024-04-17 14:57:22.509870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:14.228 [2024-04-17 14:57:22.509944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:38:14.228 [2024-04-17 14:57:22.509962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 
18.080 ms 00:38:14.228 [2024-04-17 14:57:22.509975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:14.228 [2024-04-17 14:57:22.528772] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:14.228 [2024-04-17 14:57:22.528844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:38:14.228 [2024-04-17 14:57:22.528863] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.735 ms 00:38:14.228 [2024-04-17 14:57:22.528875] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:14.228 [2024-04-17 14:57:22.547676] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:14.228 [2024-04-17 14:57:22.547754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:38:14.228 [2024-04-17 14:57:22.547772] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.734 ms 00:38:14.228 [2024-04-17 14:57:22.547785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:14.228 [2024-04-17 14:57:22.566346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:14.228 [2024-04-17 14:57:22.566455] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:38:14.228 [2024-04-17 14:57:22.566482] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.438 ms 00:38:14.228 [2024-04-17 14:57:22.566513] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:14.228 [2024-04-17 14:57:22.566581] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:38:14.228 [2024-04-17 14:57:22.566604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:38:14.228 [2024-04-17 14:57:22.566619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:38:14.228 [2024-04-17 14:57:22.566633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:38:14.228 [2024-04-17 14:57:22.566667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:38:14.228 [2024-04-17 14:57:22.566680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:38:14.228 [2024-04-17 14:57:22.566693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:38:14.228 [2024-04-17 14:57:22.566706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:38:14.228 [2024-04-17 14:57:22.566723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:38:14.228 [2024-04-17 14:57:22.566759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:38:14.228 [2024-04-17 14:57:22.566781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:38:14.228 [2024-04-17 14:57:22.566798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:38:14.228 [2024-04-17 14:57:22.566817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:38:14.228 [2024-04-17 14:57:22.566837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:38:14.228 [2024-04-17 14:57:22.566859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 
wr_cnt: 0 state: free 00:38:14.228 [2024-04-17 14:57:22.566872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:38:14.228 [2024-04-17 14:57:22.566888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:38:14.228 [2024-04-17 14:57:22.566910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:38:14.228 [2024-04-17 14:57:22.566926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:38:14.228 [2024-04-17 14:57:22.566942] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:38:14.228 [2024-04-17 14:57:22.566954] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 2c9d9d3d-ae6d-4354-8d46-21872afd20a6 00:38:14.228 [2024-04-17 14:57:22.566968] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:38:14.228 [2024-04-17 14:57:22.566980] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:38:14.228 [2024-04-17 14:57:22.566991] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:38:14.228 [2024-04-17 14:57:22.567008] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:38:14.228 [2024-04-17 14:57:22.567020] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:38:14.228 [2024-04-17 14:57:22.567040] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:38:14.228 [2024-04-17 14:57:22.567067] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:38:14.228 [2024-04-17 14:57:22.567078] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:38:14.228 [2024-04-17 14:57:22.567090] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:38:14.228 [2024-04-17 14:57:22.567110] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:14.228 [2024-04-17 14:57:22.567130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:38:14.228 [2024-04-17 14:57:22.567148] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.529 ms 00:38:14.228 [2024-04-17 14:57:22.567163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:14.228 [2024-04-17 14:57:22.589590] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:14.228 [2024-04-17 14:57:22.589658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:38:14.228 [2024-04-17 14:57:22.589677] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 22.369 ms 00:38:14.228 [2024-04-17 14:57:22.589689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:14.228 [2024-04-17 14:57:22.589977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:14.228 [2024-04-17 14:57:22.589995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:38:14.229 [2024-04-17 14:57:22.590014] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.233 ms 00:38:14.229 [2024-04-17 14:57:22.590047] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:14.229 [2024-04-17 14:57:22.674382] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:14.229 [2024-04-17 14:57:22.674469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:38:14.229 [2024-04-17 14:57:22.674509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:14.229 [2024-04-17 
14:57:22.674526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:14.229 [2024-04-17 14:57:22.674598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:14.229 [2024-04-17 14:57:22.674615] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:38:14.229 [2024-04-17 14:57:22.674631] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:14.229 [2024-04-17 14:57:22.674646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:14.229 [2024-04-17 14:57:22.674761] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:14.229 [2024-04-17 14:57:22.674780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:38:14.229 [2024-04-17 14:57:22.674796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:14.229 [2024-04-17 14:57:22.674811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:14.229 [2024-04-17 14:57:22.674842] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:14.229 [2024-04-17 14:57:22.674859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:38:14.229 [2024-04-17 14:57:22.674875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:14.229 [2024-04-17 14:57:22.674890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:14.229 [2024-04-17 14:57:22.820963] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:14.229 [2024-04-17 14:57:22.821036] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:38:14.229 [2024-04-17 14:57:22.821054] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:14.229 [2024-04-17 14:57:22.821067] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:14.499 [2024-04-17 14:57:22.879914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:14.499 [2024-04-17 14:57:22.880007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:38:14.499 [2024-04-17 14:57:22.880034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:14.499 [2024-04-17 14:57:22.880049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:14.499 [2024-04-17 14:57:22.880150] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:14.499 [2024-04-17 14:57:22.880165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:38:14.499 [2024-04-17 14:57:22.880178] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:14.499 [2024-04-17 14:57:22.880190] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:14.499 [2024-04-17 14:57:22.880243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:14.499 [2024-04-17 14:57:22.880267] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:38:14.499 [2024-04-17 14:57:22.880280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:14.499 [2024-04-17 14:57:22.880291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:14.499 [2024-04-17 14:57:22.880416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:14.499 [2024-04-17 14:57:22.880432] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:38:14.499 [2024-04-17 14:57:22.880444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] 
duration: 0.000 ms 00:38:14.499 [2024-04-17 14:57:22.880456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:14.499 [2024-04-17 14:57:22.880497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:14.499 [2024-04-17 14:57:22.880541] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:38:14.499 [2024-04-17 14:57:22.880560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:14.499 [2024-04-17 14:57:22.880572] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:14.499 [2024-04-17 14:57:22.880616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:14.499 [2024-04-17 14:57:22.880630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:38:14.499 [2024-04-17 14:57:22.880642] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:14.499 [2024-04-17 14:57:22.880654] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:14.499 [2024-04-17 14:57:22.880709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:14.499 [2024-04-17 14:57:22.880725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:38:14.499 [2024-04-17 14:57:22.880738] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:14.499 [2024-04-17 14:57:22.880749] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:14.499 [2024-04-17 14:57:22.880883] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 425.601 ms, result 0 00:38:15.871 14:57:24 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:38:15.871 14:57:24 -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:38:15.871 14:57:24 -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:38:15.871 14:57:24 -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:38:15.871 14:57:24 -- ftl/common.sh@181 -- # [[ -n '' ]] 00:38:15.871 14:57:24 -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:38:15.871 Remove shared memory files 00:38:15.871 14:57:24 -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:38:15.871 14:57:24 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:38:15.871 14:57:24 -- ftl/common.sh@205 -- # rm -f rm -f 00:38:15.871 14:57:24 -- ftl/common.sh@206 -- # rm -f rm -f 00:38:15.871 14:57:24 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84289 00:38:15.871 14:57:24 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:38:15.871 14:57:24 -- ftl/common.sh@209 -- # rm -f rm -f 00:38:15.871 00:38:15.871 real 1m41.948s 00:38:15.871 user 2m25.979s 00:38:15.871 sys 0m24.000s 00:38:15.871 14:57:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:38:15.871 14:57:24 -- common/autotest_common.sh@10 -- # set +x 00:38:15.871 ************************************ 00:38:15.871 END TEST ftl_upgrade_shutdown 00:38:15.871 ************************************ 00:38:16.128 14:57:24 -- ftl/ftl.sh@82 -- # '[' -eq 1 ']' 00:38:16.128 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 82: [: -eq: unary operator expected 00:38:16.128 14:57:24 -- ftl/ftl.sh@89 -- # '[' -eq 1 ']' 00:38:16.128 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 89: [: -eq: unary operator expected 00:38:16.128 14:57:24 -- ftl/ftl.sh@1 -- # at_ftl_exit 00:38:16.128 14:57:24 -- ftl/ftl.sh@14 -- # killprocess 77462 00:38:16.128 14:57:24 -- common/autotest_common.sh@936 -- # '[' -z 77462 ']' 
00:38:16.128 14:57:24 -- common/autotest_common.sh@940 -- # kill -0 77462 00:38:16.128 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (77462) - No such process 00:38:16.128 Process with pid 77462 is not found 00:38:16.128 14:57:24 -- common/autotest_common.sh@963 -- # echo 'Process with pid 77462 is not found' 00:38:16.128 14:57:24 -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:38:16.128 14:57:24 -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84784 00:38:16.128 14:57:24 -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:16.128 14:57:24 -- ftl/ftl.sh@20 -- # waitforlisten 84784 00:38:16.128 14:57:24 -- common/autotest_common.sh@817 -- # '[' -z 84784 ']' 00:38:16.128 14:57:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:16.128 14:57:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:38:16.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:16.128 14:57:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:16.128 14:57:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:38:16.128 14:57:24 -- common/autotest_common.sh@10 -- # set +x 00:38:16.128 [2024-04-17 14:57:24.604781] Starting SPDK v24.05-pre git sha1 0fa934e8f / DPDK 23.11.0 initialization... 00:38:16.128 [2024-04-17 14:57:24.605810] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84784 ] 00:38:16.385 [2024-04-17 14:57:24.786315] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:16.643 [2024-04-17 14:57:25.131101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:18.018 14:57:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:38:18.018 14:57:26 -- common/autotest_common.sh@850 -- # return 0 00:38:18.018 14:57:26 -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:38:18.018 nvme0n1 00:38:18.018 14:57:26 -- ftl/ftl.sh@22 -- # clear_lvols 00:38:18.018 14:57:26 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:38:18.018 14:57:26 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:38:18.276 14:57:26 -- ftl/common.sh@28 -- # stores=ad45aea4-f263-4b99-a515-7787d4dc0f0b 00:38:18.276 14:57:26 -- ftl/common.sh@29 -- # for lvs in $stores 00:38:18.276 14:57:26 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ad45aea4-f263-4b99-a515-7787d4dc0f0b 00:38:18.533 14:57:27 -- ftl/ftl.sh@23 -- # killprocess 84784 00:38:18.533 14:57:27 -- common/autotest_common.sh@936 -- # '[' -z 84784 ']' 00:38:18.533 14:57:27 -- common/autotest_common.sh@940 -- # kill -0 84784 00:38:18.790 14:57:27 -- common/autotest_common.sh@941 -- # uname 00:38:18.790 14:57:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:38:18.790 14:57:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84784 00:38:18.790 14:57:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:38:18.790 14:57:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:38:18.790 14:57:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84784' 00:38:18.790 killing process with pid 84784 00:38:18.790 14:57:27 -- 
common/autotest_common.sh@955 -- # kill 84784 00:38:18.790 14:57:27 -- common/autotest_common.sh@960 -- # wait 84784 00:38:22.071 14:57:30 -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:38:22.071 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:22.071 Waiting for block devices as requested 00:38:22.071 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:38:22.071 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:38:22.071 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:38:22.328 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:38:27.607 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:38:27.607 14:57:35 -- ftl/ftl.sh@28 -- # remove_shm 00:38:27.607 Remove shared memory files 00:38:27.607 14:57:35 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:38:27.607 14:57:35 -- ftl/common.sh@205 -- # rm -f rm -f 00:38:27.607 14:57:35 -- ftl/common.sh@206 -- # rm -f rm -f 00:38:27.607 14:57:35 -- ftl/common.sh@207 -- # rm -f rm -f 00:38:27.607 14:57:35 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:38:27.607 14:57:35 -- ftl/common.sh@209 -- # rm -f rm -f 00:38:27.607 00:38:27.607 real 10m59.102s 00:38:27.607 user 13m46.966s 00:38:27.607 sys 1m33.512s 00:38:27.607 14:57:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:38:27.607 14:57:35 -- common/autotest_common.sh@10 -- # set +x 00:38:27.607 ************************************ 00:38:27.607 END TEST ftl 00:38:27.607 ************************************ 00:38:27.607 14:57:35 -- spdk/autotest.sh@340 -- # '[' 0 -eq 1 ']' 00:38:27.607 14:57:35 -- spdk/autotest.sh@344 -- # '[' 0 -eq 1 ']' 00:38:27.607 14:57:35 -- spdk/autotest.sh@349 -- # '[' 0 -eq 1 ']' 00:38:27.607 14:57:35 -- spdk/autotest.sh@353 -- # '[' 0 -eq 1 ']' 00:38:27.607 14:57:35 -- spdk/autotest.sh@360 -- # [[ 0 -eq 1 ]] 00:38:27.607 14:57:35 -- spdk/autotest.sh@364 -- # [[ 0 -eq 1 ]] 00:38:27.607 14:57:35 -- spdk/autotest.sh@368 -- # [[ 0 -eq 1 ]] 00:38:27.607 14:57:35 -- spdk/autotest.sh@372 -- # [[ 0 -eq 1 ]] 00:38:27.607 14:57:35 -- spdk/autotest.sh@377 -- # trap - SIGINT SIGTERM EXIT 00:38:27.607 14:57:35 -- spdk/autotest.sh@379 -- # timing_enter post_cleanup 00:38:27.607 14:57:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:38:27.607 14:57:35 -- common/autotest_common.sh@10 -- # set +x 00:38:27.607 14:57:35 -- spdk/autotest.sh@380 -- # autotest_cleanup 00:38:27.607 14:57:35 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:38:27.607 14:57:35 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:38:27.607 14:57:35 -- common/autotest_common.sh@10 -- # set +x 00:38:28.980 INFO: APP EXITING 00:38:28.980 INFO: killing all VMs 00:38:28.980 INFO: killing vhost app 00:38:28.980 INFO: EXIT DONE 00:38:29.238 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:29.496 lsblk: /dev/nvme3c3n1: not a block device 00:38:29.754 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:38:29.754 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:38:29.754 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:38:29.754 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:38:30.320 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:30.577 lsblk: /dev/nvme3c3n1: not a block device 00:38:30.835 Cleaning 00:38:30.835 Removing: /var/run/dpdk/spdk0/config 
00:38:30.835 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:38:30.835 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:38:30.835 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:38:30.835 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:38:30.835 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:38:30.835 Removing: /var/run/dpdk/spdk0/hugepage_info 00:38:30.835 Removing: /var/run/dpdk/spdk0 00:38:30.835 Removing: /var/run/dpdk/spdk_pid61688 00:38:30.835 Removing: /var/run/dpdk/spdk_pid61953 00:38:30.835 Removing: /var/run/dpdk/spdk_pid62300 00:38:30.835 Removing: /var/run/dpdk/spdk_pid62415 00:38:30.835 Removing: /var/run/dpdk/spdk_pid62530 00:38:30.835 Removing: /var/run/dpdk/spdk_pid62660 00:38:30.835 Removing: /var/run/dpdk/spdk_pid62781 00:38:30.835 Removing: /var/run/dpdk/spdk_pid62831 00:38:30.835 Removing: /var/run/dpdk/spdk_pid62877 00:38:30.835 Removing: /var/run/dpdk/spdk_pid62956 00:38:30.835 Removing: /var/run/dpdk/spdk_pid63071 00:38:30.835 Removing: /var/run/dpdk/spdk_pid63544 00:38:30.836 Removing: /var/run/dpdk/spdk_pid63634 00:38:30.836 Removing: /var/run/dpdk/spdk_pid63723 00:38:30.836 Removing: /var/run/dpdk/spdk_pid63745 00:38:30.836 Removing: /var/run/dpdk/spdk_pid63919 00:38:30.836 Removing: /var/run/dpdk/spdk_pid63946 00:38:30.836 Removing: /var/run/dpdk/spdk_pid64116 00:38:30.836 Removing: /var/run/dpdk/spdk_pid64137 00:38:30.836 Removing: /var/run/dpdk/spdk_pid64213 00:38:30.836 Removing: /var/run/dpdk/spdk_pid64241 00:38:30.836 Removing: /var/run/dpdk/spdk_pid64309 00:38:30.836 Removing: /var/run/dpdk/spdk_pid64338 00:38:30.836 Removing: /var/run/dpdk/spdk_pid64550 00:38:30.836 Removing: /var/run/dpdk/spdk_pid64598 00:38:30.836 Removing: /var/run/dpdk/spdk_pid64684 00:38:30.836 Removing: /var/run/dpdk/spdk_pid64778 00:38:30.836 Removing: /var/run/dpdk/spdk_pid64824 00:38:30.836 Removing: /var/run/dpdk/spdk_pid64922 00:38:30.836 Removing: /var/run/dpdk/spdk_pid64978 00:38:30.836 Removing: /var/run/dpdk/spdk_pid65028 00:38:30.836 Removing: /var/run/dpdk/spdk_pid65079 00:38:30.836 Removing: /var/run/dpdk/spdk_pid65136 00:38:30.836 Removing: /var/run/dpdk/spdk_pid65192 00:38:30.836 Removing: /var/run/dpdk/spdk_pid65248 00:38:30.836 Removing: /var/run/dpdk/spdk_pid65304 00:38:30.836 Removing: /var/run/dpdk/spdk_pid65360 00:38:30.836 Removing: /var/run/dpdk/spdk_pid65417 00:38:30.836 Removing: /var/run/dpdk/spdk_pid65473 00:38:30.836 Removing: /var/run/dpdk/spdk_pid65525 00:38:30.836 Removing: /var/run/dpdk/spdk_pid65581 00:38:30.836 Removing: /var/run/dpdk/spdk_pid65637 00:38:30.836 Removing: /var/run/dpdk/spdk_pid65695 00:38:30.836 Removing: /var/run/dpdk/spdk_pid65750 00:38:30.836 Removing: /var/run/dpdk/spdk_pid65802 00:38:30.836 Removing: /var/run/dpdk/spdk_pid65861 00:38:30.836 Removing: /var/run/dpdk/spdk_pid65920 00:38:30.836 Removing: /var/run/dpdk/spdk_pid65976 00:38:30.836 Removing: /var/run/dpdk/spdk_pid66034 00:38:31.094 Removing: /var/run/dpdk/spdk_pid66127 00:38:31.094 Removing: /var/run/dpdk/spdk_pid66264 00:38:31.094 Removing: /var/run/dpdk/spdk_pid66453 00:38:31.094 Removing: /var/run/dpdk/spdk_pid66563 00:38:31.094 Removing: /var/run/dpdk/spdk_pid66620 00:38:31.094 Removing: /var/run/dpdk/spdk_pid67113 00:38:31.094 Removing: /var/run/dpdk/spdk_pid67226 00:38:31.094 Removing: /var/run/dpdk/spdk_pid67357 00:38:31.094 Removing: /var/run/dpdk/spdk_pid67425 00:38:31.094 Removing: /var/run/dpdk/spdk_pid67461 00:38:31.094 Removing: /var/run/dpdk/spdk_pid67542 00:38:31.094 Removing: /var/run/dpdk/spdk_pid68225 00:38:31.094 
Removing: /var/run/dpdk/spdk_pid68282 00:38:31.094 Removing: /var/run/dpdk/spdk_pid68807 00:38:31.094 Removing: /var/run/dpdk/spdk_pid68916 00:38:31.094 Removing: /var/run/dpdk/spdk_pid69046 00:38:31.094 Removing: /var/run/dpdk/spdk_pid69114 00:38:31.094 Removing: /var/run/dpdk/spdk_pid69156 00:38:31.094 Removing: /var/run/dpdk/spdk_pid69191 00:38:31.094 Removing: /var/run/dpdk/spdk_pid71179 00:38:31.094 Removing: /var/run/dpdk/spdk_pid71337 00:38:31.094 Removing: /var/run/dpdk/spdk_pid71346 00:38:31.094 Removing: /var/run/dpdk/spdk_pid71364 00:38:31.094 Removing: /var/run/dpdk/spdk_pid71403 00:38:31.094 Removing: /var/run/dpdk/spdk_pid71407 00:38:31.094 Removing: /var/run/dpdk/spdk_pid71419 00:38:31.094 Removing: /var/run/dpdk/spdk_pid71464 00:38:31.094 Removing: /var/run/dpdk/spdk_pid71468 00:38:31.094 Removing: /var/run/dpdk/spdk_pid71480 00:38:31.094 Removing: /var/run/dpdk/spdk_pid71525 00:38:31.094 Removing: /var/run/dpdk/spdk_pid71529 00:38:31.094 Removing: /var/run/dpdk/spdk_pid71541 00:38:31.094 Removing: /var/run/dpdk/spdk_pid72973 00:38:31.094 Removing: /var/run/dpdk/spdk_pid73094 00:38:31.094 Removing: /var/run/dpdk/spdk_pid73249 00:38:31.094 Removing: /var/run/dpdk/spdk_pid73381 00:38:31.094 Removing: /var/run/dpdk/spdk_pid73518 00:38:31.094 Removing: /var/run/dpdk/spdk_pid73655 00:38:31.094 Removing: /var/run/dpdk/spdk_pid73814 00:38:31.094 Removing: /var/run/dpdk/spdk_pid73899 00:38:31.094 Removing: /var/run/dpdk/spdk_pid74055 00:38:31.094 Removing: /var/run/dpdk/spdk_pid74438 00:38:31.094 Removing: /var/run/dpdk/spdk_pid74490 00:38:31.094 Removing: /var/run/dpdk/spdk_pid75004 00:38:31.094 Removing: /var/run/dpdk/spdk_pid75201 00:38:31.094 Removing: /var/run/dpdk/spdk_pid75305 00:38:31.094 Removing: /var/run/dpdk/spdk_pid75431 00:38:31.094 Removing: /var/run/dpdk/spdk_pid75504 00:38:31.094 Removing: /var/run/dpdk/spdk_pid75540 00:38:31.094 Removing: /var/run/dpdk/spdk_pid75875 00:38:31.094 Removing: /var/run/dpdk/spdk_pid75952 00:38:31.094 Removing: /var/run/dpdk/spdk_pid76038 00:38:31.094 Removing: /var/run/dpdk/spdk_pid76464 00:38:31.094 Removing: /var/run/dpdk/spdk_pid76640 00:38:31.094 Removing: /var/run/dpdk/spdk_pid77462 00:38:31.094 Removing: /var/run/dpdk/spdk_pid77619 00:38:31.094 Removing: /var/run/dpdk/spdk_pid77855 00:38:31.094 Removing: /var/run/dpdk/spdk_pid77958 00:38:31.094 Removing: /var/run/dpdk/spdk_pid78289 00:38:31.094 Removing: /var/run/dpdk/spdk_pid78554 00:38:31.094 Removing: /var/run/dpdk/spdk_pid78924 00:38:31.094 Removing: /var/run/dpdk/spdk_pid79129 00:38:31.094 Removing: /var/run/dpdk/spdk_pid79248 00:38:31.094 Removing: /var/run/dpdk/spdk_pid79334 00:38:31.094 Removing: /var/run/dpdk/spdk_pid79461 00:38:31.094 Removing: /var/run/dpdk/spdk_pid79497 00:38:31.094 Removing: /var/run/dpdk/spdk_pid79572 00:38:31.094 Removing: /var/run/dpdk/spdk_pid79769 00:38:31.094 Removing: /var/run/dpdk/spdk_pid80039 00:38:31.094 Removing: /var/run/dpdk/spdk_pid80386 00:38:31.094 Removing: /var/run/dpdk/spdk_pid80758 00:38:31.094 Removing: /var/run/dpdk/spdk_pid81115 00:38:31.094 Removing: /var/run/dpdk/spdk_pid81552 00:38:31.094 Removing: /var/run/dpdk/spdk_pid81694 00:38:31.094 Removing: /var/run/dpdk/spdk_pid81808 00:38:31.094 Removing: /var/run/dpdk/spdk_pid82379 00:38:31.094 Removing: /var/run/dpdk/spdk_pid82460 00:38:31.094 Removing: /var/run/dpdk/spdk_pid82847 00:38:31.094 Removing: /var/run/dpdk/spdk_pid83206 00:38:31.094 Removing: /var/run/dpdk/spdk_pid83661 00:38:31.094 Removing: /var/run/dpdk/spdk_pid83791 00:38:31.094 Removing: 
/var/run/dpdk/spdk_pid83855 00:38:31.094 Removing: /var/run/dpdk/spdk_pid83925 00:38:31.094 Removing: /var/run/dpdk/spdk_pid83997 00:38:31.352 Removing: /var/run/dpdk/spdk_pid84068 00:38:31.352 Removing: /var/run/dpdk/spdk_pid84289 00:38:31.352 Removing: /var/run/dpdk/spdk_pid84345 00:38:31.352 Removing: /var/run/dpdk/spdk_pid84422 00:38:31.352 Removing: /var/run/dpdk/spdk_pid84507 00:38:31.352 Removing: /var/run/dpdk/spdk_pid84553 00:38:31.352 Removing: /var/run/dpdk/spdk_pid84630 00:38:31.352 Removing: /var/run/dpdk/spdk_pid84784 00:38:31.352 Clean 00:38:31.352 14:57:39 -- common/autotest_common.sh@1437 -- # return 0 00:38:31.352 14:57:39 -- spdk/autotest.sh@381 -- # timing_exit post_cleanup 00:38:31.352 14:57:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:38:31.352 14:57:39 -- common/autotest_common.sh@10 -- # set +x 00:38:31.352 14:57:39 -- spdk/autotest.sh@383 -- # timing_exit autotest 00:38:31.352 14:57:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:38:31.352 14:57:39 -- common/autotest_common.sh@10 -- # set +x 00:38:31.352 14:57:39 -- spdk/autotest.sh@384 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:38:31.610 14:57:39 -- spdk/autotest.sh@386 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:38:31.610 14:57:39 -- spdk/autotest.sh@386 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:38:31.610 14:57:39 -- spdk/autotest.sh@388 -- # hash lcov 00:38:31.610 14:57:39 -- spdk/autotest.sh@388 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:38:31.610 14:57:39 -- spdk/autotest.sh@390 -- # hostname 00:38:31.610 14:57:39 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1701806725-069-updated-1701632595 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:38:31.610 geninfo: WARNING: invalid characters removed from testname! 
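[Editor's note] The coverage post-processing here is a fixed capture/merge/filter sequence: the `lcov -c` capture appears immediately above, and the merge and per-pattern filters follow below. A condensed sketch of those calls, built only from the commands visible in this log (the `-r` filters are unrolled one per call in the log; the loop here is equivalent, `$OUT` abbreviates `/home/vagrant/spdk_repo/spdk/../output`, and the `genhtml_*` rc options are omitted for brevity):

```bash
# Hedged sketch of the coverage steps traced in autotest.sh@390-397 above/below.
LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
OUT=/home/vagrant/spdk_repo/spdk/../output

# 1. Capture post-test counters (-t tags the tracefile with the hostname, which
#    is what triggers the "invalid characters removed from testname" warning).
$LCOV -c -d /home/vagrant/spdk_repo/spdk -t "$(hostname)" -o "$OUT/cov_test.info"

# 2. Merge the pre-test baseline with the post-test capture.
$LCOV -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

# 3. Strip third-party and uninteresting paths from the merged tracefile in place.
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
  $LCOV -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
done
```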
00:38:58.159 14:58:06 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:39:01.485 14:58:09 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:39:04.014 14:58:12 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:39:06.548 14:58:14 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:39:09.080 14:58:17 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:39:11.698 14:58:19 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:39:14.279 14:58:22 -- spdk/autotest.sh@397 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:39:14.279 14:58:22 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:14.279 14:58:22 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:39:14.279 14:58:22 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:14.279 14:58:22 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:14.279 14:58:22 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:14.279 14:58:22 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:14.279 14:58:22 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:14.279 14:58:22 -- paths/export.sh@5 -- $ export PATH 00:39:14.279 14:58:22 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:14.279 14:58:22 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:39:14.279 14:58:22 -- common/autobuild_common.sh@435 -- $ date +%s 00:39:14.279 14:58:22 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713365902.XXXXXX 00:39:14.279 14:58:22 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713365902.gWWDbo 00:39:14.279 14:58:22 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:39:14.279 14:58:22 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:39:14.279 14:58:22 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:39:14.279 14:58:22 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:39:14.279 14:58:22 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:39:14.279 14:58:22 -- common/autobuild_common.sh@451 -- $ get_config_params 00:39:14.279 14:58:22 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:39:14.279 14:58:22 -- common/autotest_common.sh@10 -- $ set +x 00:39:14.279 14:58:22 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:39:14.279 14:58:22 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:39:14.279 14:58:22 -- pm/common@17 -- $ local monitor 00:39:14.279 14:58:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:14.279 14:58:22 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=86439 00:39:14.279 14:58:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:14.279 14:58:22 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=86441 00:39:14.279 14:58:22 -- pm/common@26 -- $ sleep 1 00:39:14.279 14:58:22 -- pm/common@21 -- $ date +%s 00:39:14.279 14:58:22 -- pm/common@21 -- $ date +%s 00:39:14.279 14:58:22 -- pm/common@21 -- $ sudo -E 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1713365902 00:39:14.279 14:58:22 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1713365902 00:39:14.279 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1713365902_collect-vmstat.pm.log 00:39:14.279 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1713365902_collect-cpu-load.pm.log 00:39:15.282 14:58:23 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:39:15.282 14:58:23 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:39:15.282 14:58:23 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:39:15.282 14:58:23 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:39:15.282 14:58:23 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:39:15.282 14:58:23 -- spdk/autopackage.sh@19 -- $ timing_finish 00:39:15.282 14:58:23 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:39:15.282 14:58:23 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:39:15.282 14:58:23 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:39:15.282 14:58:23 -- spdk/autopackage.sh@20 -- $ exit 0 00:39:15.282 14:58:23 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:39:15.282 14:58:23 -- pm/common@30 -- $ signal_monitor_resources TERM 00:39:15.282 14:58:23 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:39:15.282 14:58:23 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:15.282 14:58:23 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:39:15.282 14:58:23 -- pm/common@45 -- $ pid=86447 00:39:15.282 14:58:23 -- pm/common@52 -- $ sudo kill -TERM 86447 00:39:15.282 14:58:23 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:15.282 14:58:23 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:39:15.282 14:58:23 -- pm/common@45 -- $ pid=86448 00:39:15.282 14:58:23 -- pm/common@52 -- $ sudo kill -TERM 86448 00:39:15.282 + [[ -n 5129 ]] 00:39:15.282 + sudo kill 5129 00:39:15.291 [Pipeline] } 00:39:15.310 [Pipeline] // timeout 00:39:15.316 [Pipeline] } 00:39:15.333 [Pipeline] // stage 00:39:15.339 [Pipeline] } 00:39:15.356 [Pipeline] // catchError 00:39:15.365 [Pipeline] stage 00:39:15.367 [Pipeline] { (Stop VM) 00:39:15.382 [Pipeline] sh 00:39:15.663 + vagrant halt 00:39:19.849 ==> default: Halting domain... 00:39:26.510 [Pipeline] sh 00:39:26.786 + vagrant destroy -f 00:39:31.010 ==> default: Removing domain... 
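[Editor's note] Before the VM teardown above, `stop_monitor_resources` reaps the two background samplers (`collect-cpu-load`, `collect-vmstat`) started at the top of autopackage. A minimal sketch of that shutdown pattern, assuming the PID is read back from the pidfile written next to the power logs; as the pm/common xtrace shows, the real helper actually tracks PIDs in a `MONITOR_RESOURCES_PIDS` array and only uses the pidfile as an existence check:

```bash
# Sketch of the monitor shutdown seen in the pm/common@43-52 xtrace above.
# Assumption: one pidfile per sampler under $OUT/power; names match this run.
for mon in collect-cpu-load collect-vmstat; do
  pidfile="$OUT/power/$mon.pid"
  if [[ -e $pidfile ]]; then
    sudo kill -TERM "$(cat "$pidfile")"   # corresponds to 'sudo kill -TERM 86447/86448'
  fi
done
```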
00:39:31.278 [Pipeline] sh 00:39:31.558 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:39:31.566 [Pipeline] } 00:39:31.583 [Pipeline] // stage 00:39:31.589 [Pipeline] } 00:39:31.605 [Pipeline] // dir 00:39:31.610 [Pipeline] } 00:39:31.626 [Pipeline] // wrap 00:39:31.631 [Pipeline] } 00:39:31.644 [Pipeline] // catchError 00:39:31.650 [Pipeline] stage 00:39:31.652 [Pipeline] { (Epilogue) 00:39:31.661 [Pipeline] sh 00:39:31.936 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:39:38.579 [Pipeline] catchError 00:39:38.581 [Pipeline] { 00:39:38.596 [Pipeline] sh 00:39:38.879 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:39:39.138 Artifacts sizes are good 00:39:39.148 [Pipeline] } 00:39:39.165 [Pipeline] // catchError 00:39:39.176 [Pipeline] archiveArtifacts 00:39:39.205 Archiving artifacts 00:39:39.334 [Pipeline] cleanWs 00:39:39.342 [WS-CLEANUP] Deleting project workspace... 00:39:39.342 [WS-CLEANUP] Deferred wipeout is used... 00:39:39.347 [WS-CLEANUP] done 00:39:39.349 [Pipeline] } 00:39:39.364 [Pipeline] // stage 00:39:39.369 [Pipeline] } 00:39:39.382 [Pipeline] // node 00:39:39.385 [Pipeline] End of Pipeline 00:39:39.417 Finished: SUCCESS
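[Editor's note] For reference, the `test_validate_checksum` loop exercised earlier in this run (the "Validate MD5 checksum, iteration N" blocks) boils down to: read back 1 GiB windows of the restored FTL device over NVMe/TCP with spdk_dd, hash each window, and require the hash to match the value recorded before the shutdown/upgrade boundary. A condensed sketch built only from the tcp_dd/md5sum/cut calls visible in the log; `iterations`, `testdir`, and `ref_md5` are illustrative names, since the reference hashes are recorded in an earlier part of the test not shown here:

```bash
# Sketch of the checksum validation pattern from ftl/upgrade_shutdown.sh, as traced above.
# Assumption: ref_md5[] holds the per-window MD5 sums captured before the FTL shutdown.
skip=0
for ((i = 0; i < iterations; i++)); do
  echo "Validate MD5 checksum, iteration $((i + 1))"
  # tcp_dd wraps spdk_dd against the NVMe/TCP-attached ftln1 bdev:
  # 1024 blocks of 1 MiB per window, queue depth 2, offset by $skip blocks.
  tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
  sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
  [[ $sum == "${ref_md5[i]}" ]] || return 1   # data must survive the upgrade shutdown
  skip=$((skip + 1024))                       # matches 'skip=1024', 'skip=2048' above
done
```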